What your automation will do…
Refer to OpenAI – Connection

Chat Completions
The Chat Completions API endpoint will generate a model response from a list of messages comprising a conversation.
Generate AI Response
You send messages to the AI, and it replies in a way that continues the conversation.
How it Works:
You send a list of messages with roles, like:
• System – sets behavior or rules for the assistant.
Example: “You are a helpful assistant.”
• User – your input or question.
Example: “What’s the weather today?”
• AI assistant – previous assistant replies that give the conversation context.
• User – asks a follow-up question.
The AI replies with the next message in the conversation.
For more detail, see OpenAI API reference Create chat completion
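If you want to see what the request looks like outside of Wiresk, here is a minimal sketch using OpenAI's official Python SDK (v1.x assumed, with an OPENAI_API_KEY environment variable; Wiresk makes the equivalent request for you when the step runs):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # the model selected in the Model field
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # sets behavior
        {"role": "user", "content": "What's the weather today?"},       # the user's question
    ],
)

# The assistant's reply is the next message in the conversation.
print(response.choices[0].message.content)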
Configuration Table
Input Options (Field Mapping):
- Input: Allows dynamic inputs, e.g., from a Trigger or from Step responses. (In the Input tab, uncheck “Show recommended” to see all fields.)
- Default Value: Select a value from a defined list or specify a fixed attribute.
- Manual input: Set a custom value using the Lightning Bolt feature.
Name* | Generate AI Response |
Connection* | Select your connection or create one. |
MAP FIELDS
Model*:
Select the model used to generate the response, such as gpt-4o or o3.
OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
Messages*:
A list of message objects that represent a conversation between:
- The user (you or your app),
- The assistant (the AI),
- and optionally, a system role that gives the AI instructions.
In Wiresk, the Messages field must contain at least one Element.
+Element +Map ↓
(Add a repeating group of fields or Map it from an array. See Field Mapping).
Element 1: Represents the first message.
Role*:
Who is speaking:
- System (used as a pre-prompt to give context)
- User (the user-provided prompt or question)
- AI Assistant (used to supply a previous assistant reply as context)
Text*:
What the selected role says.
Image URL:
An image to attach to the message, compatible with the GPT-4 Vision model only.
(*) required field
Response sample
[
  {
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "The capital of Germany is Berlin.",
      "refusal": null,
      "annotations": []
    },
    "logprobs": null,
    "finish_reason": "stop"
  }
]
Example of a Basic Conversation
Set the message Elements as follows:
Element 1:
- Role: “System”
- Text: “You are a helpful assistant.”
Element 2:
- Role: “User”
- Text: “What’s the capital of France?”
Element 3:
- Role: “AI Assistant”
- Text: “The capital of France is Paris.”
Element 4:
- Role: “User”
- Text: “What about Germany?”
This tells the AI:
- System: “Behave like a helpful assistant.”
- User: Asked a question about France.
- Assistant: Replied with “Paris.”
- User: Now asks a follow-up question about Germany.
ChatGPT uses this history to generate a relevant and consistent response.
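For reference, the same four Elements expressed as a direct API call look like this (a minimal sketch assuming OpenAI's Python SDK v1.x; in Wiresk you configure the Elements instead of writing code):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model selected in the Model field
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},        # Element 1
        {"role": "user", "content": "What's the capital of France?"},         # Element 2
        {"role": "assistant", "content": "The capital of France is Paris."},  # Element 3
        {"role": "user", "content": "What about Germany?"},                   # Element 4
    ],
)

print(response.choices[0].message.content)  # e.g. "The capital of Germany is Berlin."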
How to analyze text directly from any incoming data
This Method can be used as a smart, flexible step in your Flow. It allows you to analyze or generate text using OpenAI’s GPT models.
You can use it in two main ways:
- Analyze Input from a Trigger
When a Flow starts (e.g., via a webhook, form, or app event), you can extract text from the input and pass it into the Chat Completion step.
- Analyze Data from a Previous Method
You can map the output of a previous step (e.g., a support ticket, transcription, or product description) and use the AI to interpret, summarize, or act on that text.
Mapping Data into the Complete Chat Method
You can insert dynamic data from earlier steps (Triggers or Methods) using the built-in mapping tool:
- In the messages.text field of the user role, type @ directly into the text box (Lightning Bolt function).
- A popup menu will appear showing all available fields from:
- The initial Trigger
- Previous Method responses
- Select the field you want to insert, and it will be dynamically replaced when the Flow runs.
Example
In the user message text field of the Complete Chat Method, enter:
Summarize the following customer issue: @form.description
Here, @form.description refers to a field captured from the form Trigger.
After executing the Complete Chat Method in your Wiresk Flow, the assistant’s response is returned in the "content" field. This field contains the AI-generated message based on the input prompt and conversation context.
Response sample
{
"role": "assistant",
"content": "Summary generated by ChatGPT"
}
Next Step: Mapping the "content" Field
You can now use this "content" field as input for subsequent steps in your Flow. To do this:
- In any following Method or action step, click into the relevant field where you want to insert dynamic content.
- Type @ in the manual input box (Lightning Bolt function) to activate the data mapping menu.
- Locate and select the "content" field from the output of the Complete Chat step.
- The selected field will be inserted as a dynamic variable and evaluated during Flow execution.

Moderation
Identify potentially harmful content in text and images.
Content Moderation
- Content filtering in user-generated platforms (e.g., forums, chats, reviews)
- Pre-screening messages before they’re published or processed
- Moderation workflows in customer service or community management
- Risk management for brands to detect and block offensive, violent, or inappropriate content
For more detail, see OpenAI docs on Moderation.
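For orientation, a minimal sketch of the same moderation check made directly with OpenAI's Python SDK (v1.x assumed; the sample text is a placeholder):

from openai import OpenAI

client = OpenAI()

result = client.moderations.create(
    model="omni-moderation-latest",
    input="I want to report an abusive comment.",   # a single block of plain text
    # input=["first message", "second message"]     # or an array of texts for batch checks
)

item = result.results[0]
print(item.flagged)          # True if any category was flagged
print(item.category_scores)  # per-category scores, as in the response sample below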
Configuration Table:
Input Options (Field Mapping):
- Input: Allows dynamic inputs, e.g., from a Trigger or from Step responses. (In the Input tab, uncheck “Show recommended” to see all fields.)
- Default Value: Select a value from a defined list or specify a fixed attribute.
- Manual input: Set a custom value using the Lightning Bolt feature.
Name* | Content Moderation |
Connection* | Select your connection or create one. |
MAP FIELDS
Input Format*:
Text
A single block of plain text input.
Use Case: Use this when you are analyzing one piece of text at a time, such as a message, comment, or user input.
Array of Texts
A list (array) of multiple text strings submitted at once.
Use Case: Ideal for batch processing or when you need to moderate several messages in one API call.
Note: The moderation model will return a response for each item in the array, making it efficient for bulk checks.
Image URL
A URL that points to an image hosted online.
Use Case: Get classification information for images.
Model:
omni-moderation-latest: This model and all snapshots support more categorization options and multi-modal inputs.
(*) required field
Response sample for text moderation:
{
  "id": "modr-1002",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": false,
      "categories": {
        "harassment": false,
        "harassment/threatening": false,
        "sexual": false,
        "hate": false,
        "hate/threatening": false,
        "illicit": false,
        "illicit/violent": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "self-harm": false,
        "sexual/minors": false,
        "violence": false,
        "violence/graphic": false
      },
      "category_scores": {
        "harassment": 0.000546674185053768,
        "harassment/threatening": 0.000006922183404468203,
        "sexual": 0.000008220189478350845,
        "hate": 0.000004757645425657629,
        "hate/threatening": 4.289333109916098e-7,
        "illicit": 0.000009610241549947397,
        "illicit/violent": 0.0000024682904407607285,
        "self-harm/intent": 0.0000036478537675800286,
        "self-harm/instructions": 0.00021296615605463458,
        "self-harm": 0.000009838119439583208,
        "sexual/minors": 0.000001670142184809518,
        "violence": 0.00000861465062380632,
        "violence/graphic": 0.0000015689549727726036
      },
      "category_applied_input_types": {
        "harassment": ["text"],
        "harassment/threatening": ["text"],
        "sexual": ["text"],
        "hate": ["text"],
        "hate/threatening": ["text"],
        "illicit": ["text"],
        "illicit/violent": ["text"],
        "self-harm/intent": ["text"],
        "self-harm/instructions": ["text"],
        "self-harm": ["text"],
        "sexual/minors": ["text"],
        "violence": ["text"],
        "violence/graphic": ["text"]
      }
    }
  ]
}

Audio
Learn how to turn audio into text or text into audio.
Text to Speech
For more detail, see OpenAI API reference Create Speech.
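For orientation, a minimal sketch of the underlying speech request using OpenAI's Python SDK (v1.x assumed; the model, voice, and text are placeholders). Wiresk returns the resulting audio as a base64-encoded string:

import base64
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",               # one of the available TTS models
    voice="alloy",               # any supported voice
    input="Hello from Wiresk!",  # max 4096 characters
    response_format="mp3",       # mp3, opus, aac, flac, wav, or pcm
)

audio_bytes = speech.content                                 # raw MP3 bytes
content_b64 = base64.b64encode(audio_bytes).decode("ascii")  # same shape as the "content" field in the response sample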
Configuration Table:
Input Options (Field Mapping):
- Input: Allows dynamic inputs, e.g., from a Trigger or from Step responses. (In the Input tab, uncheck “Show recommended” to see all fields.)
- Default Value: Select a value from a defined list or specify a fixed attribute.
- Manual input: Set a custom value using the Lightning Bolt feature.
Name* | Text to Speech |
Connection* | Select your connection or create one. |
MAP FIELDS
Input*:
The text to generate audio for.
Max: 4096 characters.
Voice*:
The voice to use when generating the audio.
Supported voices are: alloy, ash, ballad, coral, echo, fable, onyx, nova, sage, shimmer, and verse.
Previews of the voices are available in the Text to speech guide.
Response Format*:
The format to generate audio in.
Supported formats:
- MP3: The default response format for general use cases.
- Opus: For internet streaming and communication, low latency.
- AAC: For digital audio compression, preferred by YouTube, Android, iOS.
- FLAC: For lossless audio compression, favored by audio enthusiasts for archiving.
- WAV: Uncompressed WAV audio, suitable for low-latency applications to avoid decoding overhead.
- PCM: Similar to WAV but contains the raw samples in 24kHz (16-bit signed, little-endian), without the header.
Model:
One of the available TTS models, such as tts-1 or tts-1-hd.
(*) required field
Response sample
{
"name": "speech.mp3",
"ext": "mp3",
"content": "//PkxABgxDnIV1rAABu72SyHh..." // truncated base64 string
}
How to Decode and Listen
To convert the base64 output into an actual audio file:
- Copy the full "content" string.
- Go to a free tool like GitHub’s base64 decoder.
- Paste the string and download the MP3.
- You can then play or distribute the audio.
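If you prefer to decode the "content" string programmatically instead of using an online tool, here is a minimal Python sketch (the truncated string below is a placeholder for the full value):

import base64

content_b64 = "//PkxABgxDnIV1rAABu72SyHh..."  # placeholder: paste the full "content" string here

with open("speech.mp3", "wb") as f:  # "name" and "ext" in the response indicate an MP3 file
    f.write(base64.b64decode(content_b64))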
Transcription Audio
What It Does:
• Accepts a base64-encoded audio file
• Detects the language automatically
• Transcribes the spoken content into text in the original language
For more detail, see OpenAI API reference Create Transcription.
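For orientation, a minimal sketch of the equivalent direct call using OpenAI's Python SDK (v1.x assumed). Note that the raw API takes an audio file, whereas the Wiresk field takes the same audio as a Base64-encoded string; the file name and prompt below are placeholders:

from openai import OpenAI

client = OpenAI()

with open("audio.mp3", "rb") as audio_file:  # placeholder file name
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        prompt="Wiresk, Flow, Trigger",  # optional: should match the audio language
        temperature=0,                   # 0 = most focused and deterministic
    )

print(transcript.text)  # e.g. "Hello, world!"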
Configuration Table:
Input Options (Field Mapping):
- Input: Allows dynamic inputs, e.g., from a Trigger or from Step responses. (In the Input tab, uncheck “Show recommended” to see all fields.)
- Default Value: Select a value from a defined list or specify a fixed attribute.
- Manual input: Set a custom value using the Lightning Bolt feature.
Name* | Transcription Audio |
Connection* | Select your connection or create one. |
MAP FIELDS
Audio File (Base64 encoded)*:
Provide the Base64 encoded content.
You can map this from a previous step.
For more details, refer to Base64 Encoding documentation.
Model*:
Select the OpenAI model to use (pre-trained AI system).
File Type*:
The format of the audio file.
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
Prompt:
An optional text to guide the model’s style or continue a previous audio segment.
⚠️ The prompt should match the audio language.
Temperature:
The sampling temperature, between 0 and 1.
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
(*) required field
Response sample
{
"text": "Hello, world!"
}
Translation Audio
What It Does:
• Accepts a base64-encoded audio file
• Automatically detects the spoken language
• Translates the speech into English
• Returns a clean English transcription
For more detail, see OpenAI API reference Create Translation.
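Similarly, a minimal sketch of the direct translation call with OpenAI's Python SDK (v1.x assumed; the file name is a placeholder). The output is always English text:

from openai import OpenAI

client = OpenAI()

with open("voicemail_fr.mp3", "rb") as audio_file:  # placeholder: speech in any language
    translation = client.audio.translations.create(
        model="whisper-1",  # currently the only available model
        file=audio_file,
    )

print(translation.text)  # English transcription of the spoken content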
Configuration Table:
Input Options (Field Mapping):
- Input: Allows dynamic inputs, e.g., from a Trigger or from Step responses. (In the Input tab, uncheck “Show recommended” to see all fields.)
- Default Value: Select a value from a defined list or specify a fixed attribute.
- Manual input: Set a custom value using the Lightning Bolt feature.
Name* | Translation Audio |
Connection* | Select your connection or create one. |
MAP FIELDS
Audio File (Base64 encoded)*:
Provide the Base64 encoded content.
You can map this from a previous step.
For more details, refer to Base64 Encoding documentation.
Model*:
Select the OpenAI model to use (pre-trained AI system).
Only whisper-1 (which is powered by OpenAI’s open-source Whisper V2 model) is currently available.
File Type*:
The format of the audio file.
flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
Prompt:
An optional text to guide the model’s style or continue a previous audio segment.
⚠️ The prompt should be in English.
Temperature:
The sampling temperature, between 0 and 1.
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
(*) required field
Response sample
{
"text": "Hello, world!"
}

Image
Given a prompt and/or an input image, the model will generate a new image. Related guide: Image generation
Generate Images
For more detail, see OpenAI API reference Create Image.
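For orientation, a minimal sketch of the underlying image request using OpenAI's Python SDK (v1.x assumed; the prompt is a placeholder):

from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at sunset",  # placeholder prompt
    n=1,                     # dall-e-3 supports only 1 image per request
    size="1024x1024",
    quality="standard",      # or "hd" for dall-e-3
    style="vivid",           # dall-e-3 only: "vivid" or "natural"
    response_format="url",   # URLs expire about 60 minutes after generation
)

print(result.data[0].url)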
Configuration Table:
Input Options (Field Mapping):
- Input: Allows dynamic inputs, e.g., from a Trigger or from Step responses. (In the Input tab, uncheck “Show recommended” to see all fields.)
- Default Value: Select a value from a defined list or specify a fixed attribute.
- Manual input: Set a custom value using the Lightning Bolt feature.
Name* | Generate Images |
Connection* | Select your connection or create one. |
MAP FIELDS
Prompt*:
A text description of the desired image(s).
- Max 1000 characters for model dall-e-2
- Max 4000 characters for model dall-e-3
Model:
The model to use for image generation.
dall-e-2 or dall-e-3
Number of Images:
The number of images to generate. Must be between 1 and 10.
For dall-e-3, only 1 is supported.
Default: 1
Quality:
The quality of the image that will be generated.
auto (default value) will automatically select the best quality for the given model.
hd and standard are supported for dall-e-3.
standard is the only option for dall-e-2.
Response Format:
The format in which generated images with dall-e-2 and dall-e-3 are returned: url or b64_json.
⚠️ URLs are only valid for 60 minutes after the image has been generated.
Size:
The size of the generated images.
256x256, 512x512, or 1024x1024 for dall-e-2.
1024x1024, 1792x1024, or 1024x1792 for dall-e-3.
Style:
The style of the generated images.
This parameter is only supported for dall-e-3.
Must be one of:
- Vivid causes the model to lean towards generating hyper-real and dramatic images.
- Natural causes the model to produce more natural, less hyper-real looking images.
User ID:
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more.
(*) required field
Response sample
[
{
"url": "https://image.openai.com/image/abc123xyz.png"
}
]

AI Router
AI Router (in development)
⚠️ Since this Method is in development:
• Response structure might change
• Features like validation, formatting rules, or error handling might still be evolving
• Ideal for internal testing or advanced builders familiar with OpenAI prompts
Configuration Table:
Input Options (Field Mapping):
- Input: Allows dynamic inputs, e.g., from a Trigger or from Step responses. (In the Input tab, uncheck “Show recommended” to see all fields.)
- Default Value: Select a value from a defined list or specify a fixed attribute.
- Manual input: Set a custom value using the Lightning Bolt feature.
Name* | AI Router |
Connection* | Select your connection or create one. |
MAP FIELDS
Model*:
Select the OpenAI model (like gpt-4, gpt-3.5-turbo) that processes the routing decision.
Prompt Messages*:
This field accepts system/user messages in OpenAI Chat format to guide the AI’s logic. You can feed in data, questions, or instructions that the AI will analyze.
Context Object:
This allows you to pass additional structured input (like JSON objects) to be used in the prompt for more complex routing.
+ Add Field (Manual Field Builder)
You can manually define extra data fields. These can be of the following types:
- String
- Integer
- Float
- Boolean
- Array
- Object
This makes the Router very flexible: you can feed in structured business data, parsed webhook payloads, or user input to fine-tune the routing decision.
(*) required field
Test run or automate your Flow
After setting up your Flow, you can choose to:
- “Run once”: your Flow will run only a single time. You can use this function to test your Flow.
- “Run Scheduler” will automate your Flows with the recurrence rule you previously defined.
For more details, refer to How to run a Flow tutorial in our Help Center.
If you are using a Webhook Trigger, the Flow will initiate automatically when a webhook is received from your connected apps. This means that the Flow is automated without a scheduler and will run until you deactivate the Flow manually. Refer to Webhook documentation in our Help Center.
If you need an integration that you cannot find in Wiresk, you can make a request to our team, and we will try our best to satisfy your needs.