Using ChatGPT To Write a Travel Bot On Top of ChatGPT

By Sedky Haider

I used ChatGPT to write a Digital Travel Agent built on top of ChatGPT. In a matter of hours.

xzibit meme

This blog walks through the process. Alternatively, you can see it in action at raisin.sedkodes.com. You can view the finished code here.

And because I'm a massive DevOps nerd, I'm going to show you how to deploy it as well.

OK, first things first: I found a starter template to use, because I hate the boilerplate bit.


HINT: I could've used ChatGPT to find the starter template as well... after all, ChatGPT has replaced Google Search for everything else I do.


Okay, so first I asked ChatGPT to modify the entire repo:

initial prompt

Okay, not very effective yet. Either our jobs are safe, or my prompt sucked. ChatGPT output a bunch of Python files, and there's not a single Python file in that repo.
There must be a similar project written in Python in ChatGPT's training data.

Let's try again, but be more specific.

I started by asking it to describe my code, dumping in the contents of the messages component:

Can you describe what this code does:

<copy paste dumped the entire "components/messages.js" file>

And the response:

This is a React component in JavaScript that displays a list of messages in a chat-like interface. The component is called Messages and it receives three props: events, isProcessing, and onUndo.

The events prop is an array of objects representing messages in the chat. Each object can have a prompt property or an image property. If an object has an image property, the component displays the image along with an optional "Undo" button. If an object has a prompt property, the component displays the prompt as a message from the user.

The isProcessing prop is a boolean value indicating whether the component is currently processing an event. If isProcessing is true, the component displays a loading spinner.

The onUndo prop is a callback function that is called when the "Undo" button is clicked. The function receives the index of the event to undo as a parameter.

The Messages component uses the useRef hook to create a reference to an HTML element that will be used to scroll the chat to the bottom. It also uses the useEffect hook to scroll the chat to the bottom whenever a new message is added, except for the first two messages.

The component returns a section element containing the messages, the loading spinner (if isProcessing is true), and the reference to the HTML element used for scrolling. Each message is displayed using the Message component, which wraps the content of the message and adds a sender label. If the message is an image, the Image component is used to display it. If the message has an "Undo" button, it is displayed along with the button.
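The scroll rule it describes ("except for the first two messages") boils down to a check like this — a sketch of the idea, not the actual file contents:

```javascript
// Auto-scroll on new events, except while only the first two
// (introductory) messages are present — per the explanation above.
function shouldScrollToBottom(events) {
  return events.length > 2;
}

// Inside the component it would be wired up roughly like:
// useEffect(() => {
//   if (shouldScrollToBottom(events)) {
//     bottomRef.current.scrollIntoView({ behavior: "smooth" });
//   }
// }, [events]);
```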

Holy crap! That is an incredibly articulate and easy-to-understand explanation. If it's able to parse the code that well, perhaps it can change the code:

Let's ask:

second step

Whoa! It spit out a perfectly-working version of the file without any references to the code. It even included this explanation afterwards:

In this modified code, the Image component has been removed, along with the logic for displaying and handling image-related events. The code now only handles prompt events.

The "What should we change?" prompt is now displayed if there are no events, or if the user has entered a prompt that was processed. The "Undo" button is displayed only if the latest event is a prompt and if the onUndo callback is provided.

Finally, the Messages component still uses the useRef and useEffect hooks to scroll to the bottom of the chat when new events are added. The component still returns a section element containing the messages, the loading spinner (if isProcessing is true), and the reference to the HTML element used for scrolling.

Perfect. Let's switch context, and give it another file in the project to consume:

third step

And again, a perfect explanation (feel free to skip):

fourth step

Now that we've given it the context, let's ask it to code again:

second step

It generated half the file and stopped. This is because the "free" version I'm on ran out of tokens. GPT-4 can process up to 32,000 tokens, so in theory the next bit of back and forth is avoidable.

So, given my stubbornness, I tried another approach and asked it to modify a specific part of the file:

sixth step

And it gave me this code block:

let inputComponent = null;
  switch (promptType) {
    case "date":
      inputComponent = (
        <input
          id="prompt-input"
          type="date"
          name="prompt"
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          className={`block w-full flex-grow${
            disabled ? " rounded-md" : " rounded-l-md"
          }`}
          disabled={disabled}
        />
      );
      break;
    case "time":
      inputComponent = (
        <input
          id="prompt-input"
          type="time"
          name="prompt"
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          className={`block w-full flex-grow${
            disabled ? " rounded-md" : " rounded-l-md"
          }`}
          disabled={disabled}
        />
      );
      break;
    case ...

So not bad, but certainly not ideal: there is a lot of needless repetition. But that's easy enough to fix. The rest of the file is complete, including the rendered HTML:

return (
    <form onSubmit={handleSubmit} className="animate-in fade-in duration-700">
      <Message sender="replicate" isSameSender>
        <label htmlFor="prompt-input">
          {isFirstPrompt
            ? "What should we change?"
            : "What should we change now?"}
        </label>
      </Message>

      <div className="flex mt-8">{inputComponent}</div>

      {!disabled && (
        <button
          className="bg-black text-white rounded-md text-small inline-block p-3 mt-4"
          type="submit"
        >
          Paint
        </button>
      )}
    </form>
  );
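About that repetition in the inputComponent switch: since only the `type` attribute differs between the cases, one way to collapse it is a lookup (a hypothetical sketch — names are mine):

```javascript
// Only the `type` attribute varies between the repeated <input> cases,
// so a whitelist lookup can replace the whole switch.
const KNOWN_INPUT_TYPES = ["date", "time", "number", "email"];

function inputTypeFor(promptType) {
  // Fall back to a plain text input for unknown prompt types.
  return KNOWN_INPUT_TYPES.includes(promptType) ? promptType : "text";
}
```

The JSX then becomes a single `<input type={inputTypeFor(promptType)} ... />` instead of one copy per case.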

This is pretty effin incredible. It almost removes the need for a developer to worry about syntax or boilerplate.

I've not had to Google a single "how do I X in Y" during this process. Of course, I haven't done anything complex, but this is still a massive efficiency upgrade.

seventh step

Boom:

let prompt;
  switch (step) {
    case 1:
      prompt = "What is your name?";
      break;
    case 2:
      prompt = "What is your email address?";
      break;
    case 3:
      prompt = "What is your phone number?";
      break;
    case 4:
      prompt = "What is your birthdate?";
      break;
    default:
      prompt = "What should we change now?";
      break;
  }

  if (disabled) {
    return null;
  }

  let inputType;
  switch (step) {
    case 2:
      inputType = "email";
      break;
    case 3:
      inputType = "tel";
      break;
    case 4:
      inputType = "date";
      break;
    default:
      inputType = "text";
      break;
  }

Not bad at all. Again, it wasn't clever enough to combine the logic for both switches into a single switch, and it did a weird thing and skipped case 1 in the second block.

Easy enough for us to fix:

let prompt, inputType;
  switch (step) {
    case 1:
      prompt = "What is the date of your travel?";
      inputType = "text";
      break;
    case 2:
      prompt = "How many days are you traveling for?";
      inputType = "number";
      break;
    case 3:
      prompt = "Where are you traveling to?";
      inputType = "text";
      break;
  }
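Wrapping that merged switch in a plain function (a sketch — the function name and the default branch are my additions) makes it trivial to sanity-check outside the component:

```javascript
// The merged switch from above as a standalone helper. The default
// branch is my addition, so steps beyond 3 fall back gracefully.
function promptForStep(step) {
  switch (step) {
    case 1:
      return { prompt: "What is the date of your travel?", inputType: "text" };
    case 2:
      return { prompt: "How many days are you traveling for?", inputType: "number" };
    case 3:
      return { prompt: "Where are you traveling to?", inputType: "text" };
    default:
      return { prompt: "What should we change now?", inputType: "text" };
  }
}
```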

How about getting stuck, and asking for help or explanations along the way?
ChatGPT has really leveled up the developer experience:

eighth step

Obviously, ChatGPT has limitations, yet adds an incredible amount of efficiency into a development cycle.

Some things it can't do:

ninth step

Cold, ChatGPT, cold...

Summary: I'll never code without ChatGPT again, even in its current state. ChatGPT was able to understand my requests and modify the source code logically in various places, translating my English requests into code. Pretty goddamn incredible. The code is at least at junior level, if not higher, and it produces output in a split second.

I foresee a business model here: selling fine-tuned AIs as a service to developer-focused companies. This is an enabler service that could easily be justified at hundreds of thousands of dollars to multiple millions.


Anyways, it was around here that I hit the point of diminishing returns.
Wrestling ChatGPT into understanding what I was saying became more work than just coding it myself.

For example, I asked it for a Node example of consuming its own API, and it gave me an incorrect URL syntax.

It was easier for me to debug the ChatGPT website and figure it out that way.
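For the record, a minimal Node sketch of that call looks roughly like this. The endpoint and body shape come from OpenAI's public API documentation; the model name and helper function are my assumptions:

```javascript
// Build a request for OpenAI's Chat Completions endpoint.
// The API key is expected to come from the environment.
function buildChatRequest(apiKey, userMessage) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: userMessage }],
      }),
    },
  };
}

// Usage (Node 18+ has fetch built in):
// const { url, options } = buildChatRequest(process.env.OPENAI_API_KEY, "Hi");
// const res = await fetch(url, options);
// const data = await res.json();
// console.log(data.choices[0].message.content);
```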


Deploying

OK, cheat time. The reason I picked this specific starter kit was that it was built using Next.js, an opinionated React framework.

All I had to do to make the website live was:

  1. Log into Vercel
  2. Hit "Add New" project and connect my GitHub project (linked below)
  3. Add my API key as an environment variable in Vercel
  4. Route my hostname to the Vercel hostname in AWS Route53 (you can use any DNS provider)

Here is step 3:

tenth step
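Inside the app, the key added in step 3 is read from process.env at runtime on the server side. A minimal sketch — the variable name here is a placeholder for whatever you configured in the Vercel dashboard:

```javascript
// Environment variables set in Vercel's dashboard are exposed to
// server-side code via process.env. Variable name is a placeholder.
function getApiKey() {
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set");
  }
  return key;
}
```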

And that's it. Vercel takes care of the rest.

A few minutes later: the "Production" environment is ready!

tenth step

Really excited to see where the team takes ChatGPT and whether these limitations can be overcome.


You can view the finished code here.