💡 Experimental action: Ask ChatGPT

Here we go: a basic action for ChatGPT. It should be good for experimentation. Coming soon in v5.6.1.

Questions, thoughts, suggestions?


The "ChatGPT" connector:


This works too.

One more case - language detection:


Funny, I had scheduled a LAB session with our developers to do exactly this, just this afternoon.

So indeed, it could be very nice to have!
We will try to see if we can use GPT to generate SQL queries based on text input from customers to pull live data from our ERP (with EM acting as a broker in between and some security measures in place).

How can I get access to the beta?

This is a very interesting implementation - I like that you can drop in parameters.

Looking forward to playing with this!

We don't do public beta testing for this version (5.6.1) because it will be released in just 2-3 weeks. We will announce the new release when it happens. Stay tuned!

@dgudkov Very interesting and fun addition. Prompts to the OpenAI API can be quite detailed and can accomplish a wide variety of tasks. ChatGPT runs Python in the backend, so familiarizing yourself with Python capabilities will be very helpful. I would also suggest that you "ask" ChatGPT for specific examples of how to use the API for specific tasks. A couple of examples:

1. Statistical Analysis

Suppose you have a dataset and you want to calculate the mean, median, and standard deviation of a particular column.

# Web request
POST https://api.openai.com/v1/engines/davinci-codex/completions

# Request body (JSON)
{
  "prompt": "import pandas as pd\n\n# Sample dataset\ndata = pd.DataFrame({'values': [10, 20, 30, 40, 50]})\n\n# Calculate mean, median, and standard deviation\nmean_value = data['values'].mean()\nmedian_value = data['values'].median()\nstd_dev = data['values'].std()\n\n(mean_value, median_value, std_dev)",
  "max_tokens": 100
}
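For reference, the code embedded in the prompt above can be run locally to verify what the model should return (note that pandas' `std()` computes the sample standard deviation by default):

```python
import pandas as pd

# Same sample dataset as embedded in the prompt above
data = pd.DataFrame({'values': [10, 20, 30, 40, 50]})

mean_value = data['values'].mean()
median_value = data['values'].median()
std_dev = data['values'].std()  # sample standard deviation (ddof=1)

print(mean_value, median_value, round(std_dev, 2))  # 30.0 30.0 15.81
```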

2. Remove rows where FICO score is outside of the normal range

import io
import openai
import pandas as pd

# Sample data as a CSV-formatted string
data_string = """Name,FICO_Score
Alice,780
Bob,850
Charlie,300
David,250
Eva,900
Frank,620
"""

# Create the prompt with instructions for data analysis
prompt = f"""
import io
import pandas as pd

# Load the dataset
data = pd.read_csv(io.StringIO('''{data_string}'''))

# Define the normal FICO score range
min_fico_score = 300
max_fico_score = 850

# Filter out rows where FICO score is outside the normal range
cleaned_data = data[(data['FICO_Score'] >= min_fico_score) & (data['FICO_Score'] <= max_fico_score)]

# Display the cleaned data
cleaned_data
"""

# Set your OpenAI API key
openai.api_key = 'your-api-key'

# Send the request to the OpenAI API
response = openai.Completion.create(
    engine="davinci-codex",
    prompt=prompt,
    max_tokens=150
)

# Extract the generated code or result from the response
result = response.choices[0].text.strip()
print(result)
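As a sanity check, the filtering that the prompt asks the model to generate can be reproduced directly in pandas with the same sample data, no API call needed:

```python
import io
import pandas as pd

# Same sample data as in the prompt above
data_string = """Name,FICO_Score
Alice,780
Bob,850
Charlie,300
David,250
Eva,900
Frank,620
"""

data = pd.read_csv(io.StringIO(data_string))

# Valid FICO scores range from 300 to 850 (inclusive)
cleaned_data = data[data['FICO_Score'].between(300, 850)]

print(cleaned_data['Name'].tolist())  # David (250) and Eva (900) are dropped
```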

In these examples, the pandas data frame can be a parameter that holds the contents of a CSV file, as long as it stays below the token limit. In the statistical analysis example, you could iterate over columns, sending a web request with the desired prompt for each one.
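Before embedding a CSV file in a prompt, a rough size pre-check is useful. The sketch below uses the common ~4-characters-per-token rule of thumb, which is an approximation, not an exact tokenizer count; the function name and default limit are illustrative, so leave yourself some headroom:

```python
def fits_in_prompt(text: str, token_limit: int = 4096, chars_per_token: float = 4.0) -> bool:
    """Rough pre-check before embedding a CSV file in a prompt.

    The ~4 characters-per-token ratio is a common rule of thumb for
    English text, not an exact tokenizer count.
    """
    return len(text) / chars_per_token <= token_limit

small_csv = "Name,FICO_Score\nAlice,780\n"
print(fits_in_prompt(small_csv))      # True
print(fits_in_prompt("x" * 100_000))  # False: ~25,000 estimated tokens
```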

Thanks, Dmitry!


Also, learning how to effectively prompt is key. I recently started an AI user group at our company and shared these resources to help learn about prompting. A small list to be sure, but it is a start.

Prompt Engineering Books at Amazon - https://www.amazon.com/s?k=the+prompt+engineering+book
LinkedIn Learning - AI prompts: online courses, training, and tutorials
r/PromptEngineering on Reddit - useful links for getting started with prompt engineering
"11 Tips to Take Your ChatGPT Prompts to the Next Level" - WIRED
"How To Write ChatGPT Prompts (150+ Awesome Prompts Inside)" - https://www.upwork.com/resources/how-to-write-chatgpt-prompts
Prompt Engineering Guide - promptingguide.ai
Learn Prompting - Your Guide to Communicating with AI
The Prompt Engineering Institute


Hi @mac and everyone else,

version 5.6.1 with the "Ask ChatGPT" action has been released. You can try it here: EasyMorph | Download free ETL tool.


@dgudkov

Hi Dmitry,

Action works as expected. In the current version, is it possible to send inputs in properly formatted code to the API? Something like this as an example:

{
  "model": "gpt-4.0",
  "prompt": "Calculate the minimum, maximum, and average of the following array of numbers: [12, 34, 7, 89, 45, 22, 67].",
  "temperature": 0.7,
  "max_tokens": 150
}

Hi Casey,

In the "ChatGPT" connector, it's already possible to select a model. If you need settings for temperature and max tokens, we can add those too.

In any case, you can use the generic "Web location" connector and the "Web request" action to send JSON requests like in your post directly to the ChatGPT API.
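For illustration, here is a sketch of the kind of JSON body a generic "Web request" action could POST to OpenAI's chat completions endpoint. The model name is just an example - substitute whichever model your API key can access:

```python
import json

# POST this body to https://api.openai.com/v1/chat/completions with an
# "Authorization: Bearer <your-api-key>" header. The field names follow
# OpenAI's chat completions API; the model name is illustrative.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": "Calculate the minimum, maximum, and average of "
                       "the following array of numbers: [12, 34, 7, 89, 45, 22, 67].",
        }
    ],
    "temperature": 0.7,
    "max_tokens": 150,
}

body = json.dumps(payload)
print(body[:60])
```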

Do you like anything specific about how ChatGPT helps you with your questions?

Hi EasyMorph Team,

A few questions. We are using the action to classify 100k purchasing transactions against a pre-defined category tree, transaction by transaction for now. To do so, each prompt contains the transaction AND the full category tree, which is always the same. Here are my questions:

  1. Is it possible to use the connector to send a JSON or CSV file attached to the prompt? It would allow us to bundle transactions.
  2. Is it possible to reduce the prompt's token count by sending the category tree only once and each transaction in a separate prompt?
  3. Is the system input good for anything in this particular use case (e.g., could we add the category tree to the system input, instead of the prompt, to improve the accuracy of the output)?

Thanks in advance!
Albert

Hi Albert,

The "Ask ChatGPT" action allows sending any system input or prompt you need - it's a thin wrapper around the API. You can insert JSON or CSV data anywhere using parameters - the prompt and the input support that.

Whether it's better to insert the category tree in the system input or in the prompt, I don't know - I haven't experimented that much with ChatGPT.

As for attaching a JSON or CSV file to the prompt (question 1): unfortunately, there is currently no option for this, and it seems impractical to implement. Under the hood, each request to the OpenAI endpoint is created individually from scratch; nothing is reused (the chat-history experience of ChatGPT is a facade).

The required data should be included in the request, either as a text representation (like raw JSON) or as multimodal input, such as images or sounds (which is not supported by the action currently).

Sending the category tree only once (question 2) is not possible. OpenAI's pricing structure requires payment for all request contents. In some situations, they might cache tokens from subsequent prompts, which you would notice in the billing details under 'cached token count'. However, that caching decision is left up to OpenAI.
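Since every request is billed for its full contents, bundling several transactions into one prompt (while staying within the token limit) is the main lever for reducing cost. A rough, purely illustrative comparison:

```python
# Back-of-the-envelope comparison of per-transaction vs batched prompts.
# All token counts are illustrative assumptions, not measurements.
category_tree_tokens = 2000   # assumed size of the full category tree
transaction_tokens = 50       # assumed size of one transaction line
num_transactions = 100_000
batch_size = 50               # transactions bundled into one prompt

# Category tree resent with every single transaction:
one_per_request = num_transactions * (category_tree_tokens + transaction_tokens)

# Category tree resent once per batch of 50 transactions:
batched = (num_transactions // batch_size) * (
    category_tree_tokens + batch_size * transaction_tokens
)

print(f"{one_per_request:,} vs {batched:,} input tokens")  # 205,000,000 vs 9,000,000
```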

As for the system input (question 3), there's no one-size-fits-all answer, and I recommend experimenting with it. From personal experience, I wouldn't expect a significant difference.