Have full control over matching your template slides with real values

The Story API can help you automatically generate content for slides, find the best slides in templates, and much more.

But if you know exactly what text to show and which visual blocks to use, and only need to fit that text to your visuals, you can use the following approach.

Create a template and blank story

First, create a template with the visuals you want to fill. Then create an empty story that will use your template.

Create slides automatically using API

After that, use the PATCH video API to add your slides. Each slide should have the following structure:

{
  "id": 1125874895785,
  "speech": "Nowadays, it seems that all applications have either Chat or GPT in their names. Newly announced applications include ChatPDF (for chatting with PDF files), ArvixGPT (for chatting with arXiv scholarly articles), and GPT for Sheets and Docs (which brings ChatGPT to Google Sheets and Docs), among many others, with new apps announced on a daily basis. Have you ever wondered how these applications are able to chat with documents that are over 100 or even 1,000 pages long, whereas if you try to do the same thing with ChatGPT, you will get an error message? It is no different with the OpenAI GPT-3 APIs. The short answer is that they convert documents that are over 100 or even 1,000 pages long into a numeric representation of the data and its related context (vector embeddings) and store them in a vector search engine. When a user chats with the document (i.e., asks questions), the system searches for and returns text similar to the question using an algorithm called Approximate Nearest Neighbor search (ANN). The program then includes the returned text in a prompt and asks the same question again via the OpenAI GPT-3 API, which returns the kind of response you are used to with ChatGPT. The benefits of this approach are that the prompt is much smaller than 1,000 pages of documents, and the user gets the answer they are looking for. On a side note, if you are worried about ChatGPT providing incorrect answers, you can point it to your organization's knowledge base and provide answers from there. The accuracy of the answers will be as good as your knowledge base.",
  "avatar": {
    "code": "chad.business3",
    "name": "Business",
    "gender": "male",
    "gif": "https://elai-media.s3.eu-west-2.amazonaws.com/avatars/chad_business_3.gif",
    "thumbnail": "https://elai-media.s3.eu-west-2.amazonaws.com/avatars/chad_business_3.jpg",
    "canvas": "https://elai-media.s3.eu-west-2.amazonaws.com/avatars/chad_business_3.png",
    "tilt": {
      "left": 0.03
    }
  },
  "language": "English",
  "voiceType": "text",
  "animation": null,
  "transition": "wiperight",
  "status": "story",
  "story": {}
}

You don't need the canvas param, as Elai will create it automatically later. For all other params (avatar, language, etc.), you can check here. You can keep them the same for all slides, but what you do need to customize is the story param, as follows:
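As a sketch, a slide payload with the structure above can be assembled programmatically before sending the PATCH request. The helper name and the sample speech text below are illustrative only; the avatar and voice settings mirror the example and can stay the same across slides.

```python
# Build a slide payload matching the structure shown above.
# make_slide() is a hypothetical helper, not part of the Elai API.
def make_slide(slide_id, speech, story):
    return {
        "id": slide_id,
        "speech": speech,
        "avatar": {
            "code": "chad.business3",
            "name": "Business",
            "gender": "male",
            "gif": "https://elai-media.s3.eu-west-2.amazonaws.com/avatars/chad_business_3.gif",
            "thumbnail": "https://elai-media.s3.eu-west-2.amazonaws.com/avatars/chad_business_3.jpg",
            "canvas": "https://elai-media.s3.eu-west-2.amazonaws.com/avatars/chad_business_3.png",
            "tilt": {"left": 0.03},
        },
        "language": "English",
        "voiceType": "text",
        "animation": None,
        "transition": "wiperight",
        "status": "story",  # tells Elai to build the slide from a template story
        "story": story,
    }

slide = make_slide(1125874895785, "Sample voice-over text for this slide.", {})
```

Keeping the shared params in one helper means each slide only differs by its id, speech, and story object.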

"story": {
    "templateSlideId": 152717506324, // reference to the slide in your template; you can find the slide ID in the URL when you click the slide in the video editor, or in the template JSON
    "header": "", // text for the header (the main text on the slide, with a bigger font size)
    "subHeader": "", // text for the subheader (additional text on the slide, with a smaller font size)
    "list": ["item1", "item2"], // if your slide contains a list, you can add values here
    "images": [{ // images array (if you want to add images dynamically)
      "src": "https://miro.medium.com/v2/xY0iKd8_iIYMYLhx2oA5kg.png"
    }]
}

Params in the story object are optional; add only the ones your slide actually uses.
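To tie it together, here is a minimal sketch of building the story object and sending the slides with a PATCH request. The base URL, endpoint path, and bearer-token header are assumptions based on typical Elai API usage; confirm them against your API documentation before relying on this.

```python
import json
import urllib.request

API_BASE = "https://apis.elai.io/api/v1"  # assumed base URL; verify in your Elai docs
API_TOKEN = "YOUR_API_TOKEN"

def story_for(template_slide_id, header="", sub_header="", items=None, images=None):
    """Build a per-slide story object, including only the params the slide uses."""
    story = {"templateSlideId": template_slide_id}
    if header:
        story["header"] = header
    if sub_header:
        story["subHeader"] = sub_header
    if items:
        story["list"] = items
    if images:
        story["images"] = [{"src": url} for url in images]
    return story

def patch_video(video_id, slides):
    """Send the prepared slides via a PATCH request (endpoint path is an assumption)."""
    req = urllib.request.Request(
        f"{API_BASE}/videos/{video_id}",
        data=json.dumps({"slides": slides}).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Omitting unused keys in `story_for` matches the note above that all story params are optional.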

Generate visuals

As soon as you have added all your slides with the proper story settings, you can generate visuals using this API, and your video will be ready for rendering.