Using ChatGPT for translation

Hello,

Is it a good idea to use ChatGPT for translations? Row by row? If so, how?

regards
Fredrik Svensson

It is certainly possible, but it can be slow. If the texts to translate are not too long, you can put them into an array and ask ChatGPT to translate each value in the array.
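
For illustration, here is a rough sketch of that idea in Python (not EasyMorph), assuming the OpenAI chat completions endpoint; the model name, target language and sample values are only placeholders:

```python
import json
import os

import requests

# A minimal sketch, assuming the OpenAI chat completions endpoint.
# Model name, target language and sample values are placeholders.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

texts = ["Hello", "Thank you", "See you tomorrow"]  # e.g. values from one column

prompt = (
    "Translate each value in the following JSON array to Swedish. "
    "Return only a JSON array with the translations, in the same order:\n"
    + json.dumps(texts)
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": prompt}]},
    timeout=60,
)
response.raise_for_status()

# The reply is expected to be a JSON array of translated strings.
translations = json.loads(response.json()["choices"][0]["message"]["content"])
print(translations)
```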

I would also look at specialized translation services - they will be faster.

How? Any example?

Here is a basic example:
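
As a rough sketch in Python (not an EasyMorph project), assuming the OpenAI chat completions endpoint; the model name and sample text are only placeholders, and {Text to translate} plays the role of the parameter that is filled in before the request is sent:

```python
import os

import requests

# A minimal sketch of a parameterized translation prompt, assuming the OpenAI
# chat completions endpoint. {Text to translate} plays the role of the
# parameter; here it is filled in with a plain string replacement.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

prompt_template = "Translate the following text to English:\n{Text to translate}"
text_to_translate = "Hej, hur mår du?"  # placeholder value, e.g. one cell of a column

prompt = prompt_template.replace("{Text to translate}", text_to_translate)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": prompt}]},
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```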

To learn prompt engineering, see OpenAI resources: https://platform.openai.com/docs/guides/prompt-engineering

So I want to insert the value from a specific row and column into the message.

Modify the prompt as needed.

OK. Could you point me in the right direction here? Should I use parameters to fill in the data that I need to send?

Yes, see the example I posted above. Parameters can be inserted into prompts in curly braces. In that example, the parameter is {Text to translate}. You can also see the value of this parameter returned by the "Parameter table" action above.

If something is not clear in the example, let me know.

You can also learn more about parameters in EasyMorph in this tutorial: EasyMorph | Parameters

I want to be able to send an arbitrary number of requests to ChatGPT (or another LLM), wait for the responses, extract the input from a specific column and row, use it as the basis for the translation, and then save the translated content into another column. Essentially, some kind of pooling mechanism where you can control how many requests are running simultaneously. I can write a service in C#/.NET to handle this, but it would be nice to have everything integrated into EasyMorph.
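
Roughly, what I have in mind, sketched in Python here for brevity (the real service would be C#/.NET or EasyMorph); it assumes the OpenAI chat completions endpoint, and the model name and sample rows are placeholders:

```python
import os
from concurrent.futures import ThreadPoolExecutor

import requests

# A hypothetical sketch of the pooling idea. A thread pool caps how many
# translation requests run at the same time; each result is written back
# to the "translated" column of its own row.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]
MAX_CONCURRENT_REQUESTS = 4  # the "pool size" knob

rows = [
    {"source": "Hej", "translated": None},
    {"source": "Tack så mycket", "translated": None},
    {"source": "Vi ses imorgon", "translated": None},
]

def translate(text: str) -> str:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": f"Translate to English: {text}"}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_REQUESTS) as pool:
    # Requests run concurrently, but never more than MAX_CONCURRENT_REQUESTS at once.
    results = pool.map(translate, (row["source"] for row in rows))
    for row, translation in zip(rows, results):
        row["translated"] = translation

print(rows)
```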

To automate scenarios like that, you need to learn how to do loops in EasyMorph. Loops are essential in programming; without them, you can't build anything serious. Since EasyMorph is effectively a functional programming environment, not an imperative one (like C#), we call loops "iterations".

Tutorial on iterations: EasyMorph | Loops and iterations

PS. Judging by your questions, I have a feeling that you haven't gone through the "Advanced" part of the tutorial yet. I highly recommend doing it. Otherwise, it might be hard to understand my answers, as I frequently refer to mechanisms explained in that part of the tutorial.

Is there any chance of getting an advanced GPT option where we can set our own local LLM address and have the iteration integrated?

@Mihai_Moisa I don't follow LLMs closely, but I read somewhere that there is a more or less settled API for LLMs. Do you happen to have a link or two to read more about this?

Or, if you have time, we can jump on a call and discuss it.

The easiest way to test a local AI is to install LM Studio. Most of these standalone programs are compatible with the OpenAI API.
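
For example, with LM Studio's local server running, the same kind of request can be pointed at its OpenAI-compatible endpoint (by default http://localhost:1234/v1, but check the server settings in LM Studio); the model name below is a placeholder:

```python
import requests

# A minimal sketch, assuming LM Studio's local server is running and exposes
# its OpenAI-compatible endpoint (by default at http://localhost:1234/v1).
# No real API key is needed for a local server.
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

response = requests.post(
    LOCAL_URL,
    json={
        "model": "local-model",  # placeholder; use the model loaded in LM Studio
        "messages": [{"role": "user", "content": "Translate to English: Hej, hur mår du?"}],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```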