Generating Videos with Midjourney, Sora & Runway | AI Hackathon Walk-Through
Runtime: 3:24
Step-by-step breakdown of a multi-tool AI pipeline that turns raw survey text into animated, data-driven videos. Includes prompt engineering, image creation, video synthesis, and auto-generated bar-chart GIFs.
This hackathon demo shows how a single project used seven generative tools to transform survey answers into a polished video sequence—all without traditional editing software.
Key stages
Text Normalization – ChatGPT converts free-form answers to Yes/No/Ambiguous for quantitative analysis.
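The normalization step above can be sketched locally. The video uses ChatGPT for this; the keyword heuristic below is a purely illustrative stand-in that maps a free-form answer to the same three buckets:

```python
import re

# Illustrative heuristic standing in for the ChatGPT normalization call:
# classify a free-form survey answer as "Yes", "No", or "Ambiguous".
YES_RE = re.compile(r"\b(yes|yeah|yep|definitely|absolutely|sure)\b", re.I)
NO_RE = re.compile(r"\b(no|nope|never|not really)\b", re.I)

def normalize_answer(text: str) -> str:
    has_yes = bool(YES_RE.search(text))
    has_no = bool(NO_RE.search(text))
    if has_yes and not has_no:
        return "Yes"
    if has_no and not has_yes:
        return "No"
    # Conflicting or missing signals fall through to Ambiguous.
    return "Ambiguous"

print(normalize_answer("Yeah, definitely!"))      # Yes
print(normalize_answer("Nope, never tried."))     # No
print(normalize_answer("It depends on the day."))  # Ambiguous
```

An LLM call handles far messier phrasing, but the contract is the same: every answer collapses to one of three labels before tallying.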
Image Generation – Midjourney for most assets; ChatGPT-Vision for edge cases such as “upside-down pizza.”
Video Synthesis – Scene-specific strengths:
• Kling – food-action loops (e.g., milk pouring into cereal).
• Sora – general motion and camera sweeps.
• Google Veo – high-coherence transitions.
• Runway – quick image-to-base-video passes.
• Pika – compositing and fine-tuned overlays.
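The per-tool strengths above amount to a routing decision per scene. A minimal sketch of that routing, assuming hypothetical scene-type keys (the tool names are from the video; the helper is illustrative):

```python
# Hypothetical scene-type → tool routing based on the strengths listed above.
SCENE_ROUTING = {
    "food_action_loop": "Kling",
    "camera_sweep": "Sora",
    "transition": "Google Veo",
    "image_to_video_base": "Runway",
    "overlay_composite": "Pika",
}

def pick_tool(scene_type: str) -> str:
    # Default to Runway's quick image-to-video pass for unlisted scene types.
    return SCENE_ROUTING.get(scene_type, "Runway")

print(pick_tool("camera_sweep"))  # Sora
```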
Animated Charts – ChatGPT auto-renders bar-chart GIFs from supplied values and background images.
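The video has ChatGPT render these GIFs; a minimal local equivalent with matplotlib is sketched below (the survey counts are hypothetical, and the background-image compositing from the video is omitted):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

# Hypothetical tallies from the normalized survey answers.
labels = ["Yes", "No", "Ambiguous"]
final_counts = [42, 31, 7]
frames = 20  # bars grow to full height over 20 frames

fig, ax = plt.subplots(figsize=(4, 3))
bars = ax.bar(labels, [0] * len(labels), color=["#4caf50", "#f44336", "#9e9e9e"])
ax.set_ylim(0, max(final_counts) * 1.1)
ax.set_title("Survey responses")

def update(frame):
    # Linearly interpolate each bar's height toward its final count.
    t = (frame + 1) / frames
    for bar, target in zip(bars, final_counts):
        bar.set_height(target * t)
    return bars

anim = FuncAnimation(fig, update, frames=frames, interval=50)
anim.save("survey_chart.gif", writer=PillowWriter(fps=20))
```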
Assembly – Combine clips, charts, and captions into the final cut.
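One common way to do this assembly step outside an editor is ffmpeg's concat demuxer; the sketch below only builds the list file and command (clip names are hypothetical, and running ffmpeg is left to the caller):

```python
def concat_command(files, list_path="clips.txt", output="final_cut.mp4"):
    # The concat demuxer reads a text file with one "file '<name>'" line
    # per clip; the caller writes list_lines to list_path, then runs cmd.
    list_lines = "\n".join(f"file '{f}'" for f in files)
    cmd = f"ffmpeg -f concat -safe 0 -i {list_path} -c copy {output}"
    return list_lines, cmd

lines, cmd = concat_command(
    ["intro.mp4", "cereal_scene.mp4", "survey_chart.mp4", "outro.mp4"]
)
print(cmd)
```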
Fork the repo to try the workflow: swap prompts, add scenes, or plug in new survey data. Links, prompt templates, and tool settings are in the description.
#AIVideoGeneration #Midjourney #RunwayML #Sora #GoogleVeo #Pika #ChatGPT #AIHackathon #GenerativePipeline #DataVisualization
📍 Chapters
0:00 Intro & project goals
0:10 Cleaning survey text with ChatGPT
0:40 Prompting Midjourney for base images
1:05 Handling edge-case imagery with ChatGPT-Vision
1:28 Scene design across Kling, Sora, Veo, Runway, Pika
2:15 Image-to-video: Runway → Pika workflow
2:40 Generating animated bar-chart GIFs in ChatGPT
3:05 Assembly & export
3:18 Takeaways and next steps