It seemed a bit cliché to have OpenAI’s GPT-3 language model write the intro to this post, so I’ve decided to go old school and write it myself. Know, however, that it would have been just as quick to auto-generate, which is the beautiful thing about the autoregressive language model that is making waves in the tech world. It has already spawned a plethora of new tools that embrace its multifaceted capabilities.

… Alright fine, here’s a nice little intro written with the davinci text model:

GPT-3 is the latest generation of AI technology from OpenAI. It has been designed to be more efficient and accurate than previous versions, while also being more user-friendly. In this blog post, we will take a look at some of the ways in which GPT-3 can be used to improve your productivity.

I tweaked the input parameters to have it explain how it works (how meta):

GPT-3 is a machine learning platform that uses a deep neural network to predict the next word in a sentence. It is trained on a large corpus of text, such as books, articles, and websites. When you type in a sentence, GPT-3 looks at the words around it and predicts what the next word should be.

In addition to predicting the next word, GPT-3 can also provide suggestions for completing a task. For example, if you start typing “I need to book a flight from” GPT-3 will suggest various airports and airlines. This can be useful when you’re not sure how to complete a task or want to explore different options.

One of the benefits of GPT-3 is that it can learn from your specific domain or industry. For example, if you’re a doctor who frequently uses medical terminology, GPT-3 will learn this vocabulary and provide more accurate predictions. This customization makes GPT-3 more efficient and accurate than general purpose machine learning platforms like Google Translate.

Wild.
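As a toy illustration of that “predict the next word” idea, here’s a tiny bigram model in Python. It’s nothing like GPT-3 (no neural network, no billions of parameters, and a corpus of one sentence I made up), but it shows the same autoregressive principle: look at the context, pick the most likely continuation.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which
# in a tiny made-up corpus.
corpus = "the cat sat on the mat and the cat slept near the cat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> cat ("cat" follows "the" three times here)
```

GPT-3 does this over tokens rather than whole words, with a deep neural network instead of a lookup table, but the loop is the same: predict the next token, append it, repeat.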

Chances are that a conversation has already taken place in your company regarding the applications of GPT-3. To suggest they are myriad would be an understatement.

I created this guide to demonstrate how I used Airtable and the OpenAI API to create actor biographies at scale. You could use this same process with any number of pre-trained models to auto-generate code, write introductions to works of fiction, produce mock interviews, write fictional cooking recipes, and so on.

By the end of this guide, you will be able to do this…

Gif showing Airtable pulling actor data from OpenAI

Step 1 – Register with OpenAI

Go to https://openai.com/api/ and sign up. They will ask you what your intent for using the service is. Be sure not to click the option that suggests you want to generate masses of automated content at scale for nefarious purposes.

OpenAI provides a generous $18 of kick-off credits for their Playground and API. It didn’t take me long to sign up for more, but those credits are more than enough to get familiar with it.

Step 2 – Register with Airtable

SIGN UP via this referral link.

You might be asking why we’re using Airtable, or what Airtable even is. Airtable is a cloud database platform with a spreadsheet-like interface. It’s one of the best tools in the no-coder’s arsenal for connecting to web apps and storing data, and it’s flexible enough that you can send data to and from the tool with great ease.

We’re going to use it because, in conjunction with the Data Fetcher Airtable extension (which we will also install), it’s a great no-code method for making API calls to any public endpoint. Data Fetcher is magic: paired with Airtable, it makes writing, saving, and scheduling API calls a breeze. There is a paid version (which I highly recommend), but you shouldn’t need it for this demo.

Step 3 – Play in the OpenAI Playground!

Before moving on, have a play in the OpenAI Playground to get familiar with what you can do with the different models. Input parameters are key in getting back the text you want for your project. There are numerous input parameters that control different aspects of the model:

| Parameter | Range | Impact |
| --- | --- | --- |
| Temperature | 0 to 1 | Controls randomness (entropy); set it to 0 for more deterministic, “robotic” responses. |
| Maximum length | 0 to 4,000 | The maximum number of tokens to generate per request. Tokens are the currency of OpenAI; one token is approximately 4 characters. |
| Stop sequences | Up to 4 | Sequences that halt generation. For example, if you ask for an ordered list of science fiction books with a stop sequence of “5.”, the output will stop after the fourth item. |
| Top P | 0 to 1 | Nucleus sampling. At 1, the model samples from its entire vocabulary; at 0.2, it samples only from the most likely tokens that together make up 20% of the probability mass. |
| Frequency penalty | 0 to 2 | Higher values decrease the likelihood of repeating the same lines verbatim. |
| Presence penalty | 0 to 2 | Higher values increase the likelihood of talking about new topics. |
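To see how these parameters hang together, here’s a minimal sketch of a completions request body in Python. The field names come from the OpenAI completions API; the prompt and values are just placeholders to experiment with, not recommendations.

```python
# The JSON body for a completions request, with every Playground
# parameter from the table above. Values here are illustrative.
payload = {
    "model": "text-davinci-002",
    "prompt": "Write me a short biography about Tom Hanks",
    "max_tokens": 60,          # ~4 characters per token, so roughly 240 characters
    "temperature": 0.5,        # 0 = near-deterministic, 1 = most random
    "top_p": 1,                # 1 = sample from the full distribution
    "frequency_penalty": 0,    # raise to discourage verbatim repetition
    "presence_penalty": 0,     # raise to encourage new topics
    "stop": ["5."],            # generation halts at this sequence
}

print(payload["prompt"])
```

In practice you’d tune a couple of these at a time in the Playground and leave the rest at their defaults.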

When you think you’ve got the output you want in the Playground, you can click the “View code” button in the top right and copy the code that we will use in the API request in Step 6 (below).

There is some great documentation on the OpenAI site to help you better understand the model.

Asking OpenAI questions about GPT-3

Step 4 – Generate an OpenAI API Key

Generate OpenAI API Key

Visit https://beta.openai.com/account/api-keys, log in, and create a new secret key. Copy it, as we will be using it as a bearer token in Airtable shortly.

Step 5 – Create Your Table in Airtable and Install Data Fetcher

Create a new table in Airtable and click “Extensions” on the right-hand side. Search the Airtable marketplace for Data Fetcher and install it. The interface will look like this:

Data Fetcher Interface in Airtable

Your Airtable table only needs two columns. Call the first “Actors” and the second “AI Output”. Populate the Actors column with a list of your favorite actors, or just write in 5–10 examples. The AI Output column can stay empty, as this is where we will write the response.

Step 6 – Create Your API Request in Data Fetcher

Now you will create your API request as follows:

  1. Click “Create request”
  2. Select “Custom” from the “Application” dropdown
  3. Change the “Method” to POST
  4. Add the URL. We’ll be using the completions API so it is as follows: “https://api.openai.com/v1/completions”
  5. Click “Authorization”
  6. Select “Bearer Token” from the “Type” dropdown
  7. Copy and paste your token from your OpenAI settings (see Step 4)
  8. Click “Body”
  9. Now enter the JSON body that defines your request and, in turn, your response. Here is the JSON I have used for this example (you could also copy the example JSON for this type of request from the OpenAI docs here):

{
  "model": "text-davinci-002",
  "top_p": 1,
  "prompt": "Write me a short biography about ***Table 3*Actors***",
  "max_tokens": 60,
  "temperature": 0.5
}

The “text-davinci-002” model is the most advanced language model in the GPT-3 series. The ***Table 3*Actors*** token above is Data Fetcher’s reference to my table’s “Actors” field, and you will need to change it to reference your own Airtable field.

This is where Data Fetcher is incredibly useful for forming API calls. We can set table references or variables as values within the request. In the example below I have amended the “prompt” value within the body of the request to say, “Write me a short biography about ” and then clicked the + on the right-hand side. From here I have selected the table reference which is the “Actors” column.

Insert Reference to Table or Variable in Data Fetcher
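Under the hood, this table-reference feature is just string interpolation: for each row, Data Fetcher substitutes the cell value into the prompt before sending the request. A rough Python equivalent (with placeholder names standing in for your “Actors” column) would be:

```python
# Mimic Data Fetcher's table-reference substitution: build one
# prompt per row of the "Actors" column. The names below are
# placeholders for whatever is in your own table.
actors = ["Tom Hanks", "Meryl Streep", "Denzel Washington"]

prompts = [f"Write me a short biography about {actor}" for actor in actors]

for prompt in prompts:
    print(prompt)
```

One request is made per row, so a table of 100 actors means 100 API calls (and 100 × max_tokens of potential token spend).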

Remember that you can amend other input values (such as “top_p”) in the JSON request to match the type of response you would like to get. You won’t know what to amend unless you test this in the Playground first (see Step 3).

Step 7 – Save and Run the Call

Next, save and run the call. You will see Data Fetcher’s response field mapping, which looks something like this:

Data Fetcher in Airtable Response Field Mapping

These are all the fields returned by the OpenAI API! You likely don’t need them all. The only field containing the GPT-3 text response is “Choices text”, so you can deselect all the fields and check only that one.
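For context, a raw completions API response is shaped roughly like the abbreviated example below; the generated text lives under choices → text, which is why “Choices text” is the only field you need:

```json
{
  "id": "cmpl-...",
  "object": "text_completion",
  "model": "text-davinci-002",
  "choices": [
    {
      "text": "Tom Hanks is an American actor...",
      "index": 0,
      "finish_reason": "length"
    }
  ],
  "usage": { "prompt_tokens": 8, "completion_tokens": 60, "total_tokens": 68 }
}
```

A "finish_reason" of "length" means the output was cut off by your max_tokens setting rather than ending naturally, so bump that value up if your biographies keep stopping mid-sentence.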

You can choose where to map this field by selecting a new or existing field from the options below it. Select “Existing field” and then choose the column we created earlier, “AI Output”.

Selecting and Assigning The Output Field in Data Fetcher

Et voilà. In case you missed anything, here is a GIF running through the entire process in Data Fetcher:

Configuring Data Fetcher in Airtable for a POST API Request

While today we’ve just looked at a simple request to provide a biography for famous people, you can do so much more with these foundational skills. You can add additional columns and give further context to your API requests by stringing together table references and expanding the “prompt” of your requests. Most importantly, you can generate huge amounts of useful copy in a variety of incredibly smart ways that might otherwise take you hours/days to create.

There’s no denying that language models such as this will completely revolutionize industries (and already have); chances are you’ve already come across some of the tools built on top of them.

I won’t comment on the existential impact of this incredible technology but I do hope this guide gives you an introduction to being a part of it. Feel free to comment below with any feedback!