# Grok 4 + Genspark AI Super Agents is INSANE!

## Metadata

- **Channel:** Julian Goldie SEO
- **YouTube:** https://www.youtube.com/watch?v=Ut64DWyaRBw
- **Date:** 17.07.2025
- **Duration:** 14:23
- **Views:** 8,490

## Description

Want to get more customers, make more profit & save 100s of hours with AI? https://go.juliangoldie.com/ai-profit-boardroom

Free AI Community here 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553

🚀 Get a FREE SEO strategy Session + Discount Now: https://go.juliangoldie.com/strategy-session

🤯  Want more money, traffic and sales from SEO? Join the SEO Elite Circle👇
https://go.juliangoldie.com/register

🤖 Need AI Automation Services? Book an AI Discovery Session Here: https://juliangoldieaiautomation.com/

Click below for FREE access to  ✅ 50 FREE AI SEO TOOLS 🔥 200+ AI SEO Prompts! 📈 FREE AI SEO COMMUNITY with 2,000 SEOs ! 🚀 Free AI SEO Course 🏆 Plus TODAY's Video NOTES...
https://go.juliangoldie.com/chat-gpt-prompts

- Want a Custom GPT built? Order here: https://kwnyzkju.manus.space/
- Join our FREE AI SEO Accelerator here: https://www.facebook.com/groups/aiseomastermind
- Need consulting? Book a call with us here: https://link.juliangoldie.com/widget/bookings/seo-gameplanesov12

## Contents

### [0:00](https://www.youtube.com/watch?v=Ut64DWyaRBw) Intro

Today I'm going to show you something that just blew my mind. Grok 4 is now available inside Genspark, and I tested it against the main Grok website. The results were absolutely crazy. Some tests made Grok 4 inside Genspark look like a total winner. Others made it crash and burn. I'm talking about real games, real code, real results. No BS, just raw testing that will change how you think about AI platforms forever. Hey, if we haven't met already, I'm the digital avatar of Julian Goldie, CEO of SEO agency Goldie Agency. Whilst he's helping clients get more leads and customers, I'm here to help you get the latest AI updates. Julian Goldie reads every comment, so make sure you comment below. So, here's the big news. Grok 4 is

### [0:40](https://www.youtube.com/watch?v=Ut64DWyaRBw&t=40s) Testing

now available inside Genspark. This means you can access the same powerful AI model through two different platforms. But here's the crazy part: they don't perform the same way at all. I ran seven brutal tests, seven real-world challenges that would make most AI systems cry. And the results were so shocking I had to run them twice just to make sure I wasn't seeing things wrong. Let me show you exactly what I tested and what happened. Test one: HTML runner game. Here's the exact prompt I used: create a playable pixel art runner game where a character jumps over obstacles, collects coins for points, gets faster over time, and tracks the high score, all in a single HTML file with inline CSS and JavaScript. When I ran this test on Grok 4 inside Genspark, the game generated, and it was working. I could see the character on screen. There were obstacles to jump over and coins to collect. The visual design looked decent, with a pixelated style that matched what I asked for. But when I actually started playing the game, that's when the problems showed up. The game mechanics weren't working properly at all. The character would continue running even when it clearly hit a block or obstacle. It was like the collision detection was completely broken. Even worse, when I tried to make the character jump, it wouldn't jump high enough to clear the obstacles. The jump height was way too low. So, basically, you had a game that looked okay but was impossible to actually play because the core mechanics were broken. Now, when I tried the same

### [2:02](https://www.youtube.com/watch?v=Ut64DWyaRBw&t=122s) HTML Puzzle

exact prompt on the main Grok website, it wasn't working at first. I got an error message and the game wouldn't load at all. Complete failure right out of the gate. But here's where it gets interesting. When I told it the game wasn't working, something amazing happened. It actually listened to my feedback and fixed itself. The game became completely playable. The UI looked good, with clean pixel art graphics. All the mechanics worked perfectly. The collision detection was spot-on. The jump height was just right. The character would stop when it hit obstacles. The coins were collectible. The score system worked. The speed increased over time, just like I asked. It was everything I wanted and more. Winner: main Grok website. The feedback loop made all the difference. Test two: HTML puzzle game. Exact prompt: build a playable sliding puzzle game with a 4x4 grid where players click tiles to move them into the empty space, scramble the puzzle with a button, show a move counter, and display a win message when solved, using pixel art style in one HTML file. Both platforms delivered solid results on this one. Grok 4 inside Genspark created a working sliding puzzle with a clean 4x4 grid layout. The tiles were clearly numbered and had a nice pixelated design. When you clicked on a tile next to the empty space, it would slide smoothly into position. The scramble button worked perfectly to mix up the puzzle. The move counter accurately tracked every move you made. When you solved the puzzle, it displayed a clear win message. The whole interface was intuitive and responsive. The main Grok website produced very similar results. The game functioned exactly as expected, with smooth tile movements, accurate move counting, and proper win detection. The design was clean, and the pixel art style was consistent throughout. Both versions were equally playable and enjoyable. Winner: tie. Both platforms delivered solid results with no major differences. Test three, data

### [3:55](https://www.youtube.com/watch?v=Ut64DWyaRBw&t=235s) Data Visualization

visualization. Exact prompt: create an interactive bar chart that shows the top 10 most popular programming languages with animated transitions when data changes, hover effects showing percentages, and buttons to sort by popularity or alphabetically, all in a single HTML file. This test showed the biggest difference between the two platforms. Grok 4 inside Genspark absolutely crushed this challenge. The output was incredible. It created a beautiful, modern-looking bar chart with smooth animated transitions. The bars would grow and shrink smoothly when you changed the data. The color scheme was professional, with gradients and modern styling. When you hovered over each bar, it would highlight and show you the exact percentage in a clean tooltip. The sort buttons were styled beautifully and worked instantly. When you clicked sort by popularity, the bars would rearrange themselves with smooth animations. Same thing when you clicked sort alphabetically. The whole thing looked like something you'd see on a high-end data visualization website. It was fast, responsive, and aesthetically pleasing. The main Grok website produced something completely different. It took forever to load, and when it finally appeared, the output was disappointing. The design looked outdated, with a boring landscape layout that felt like it was designed in 2005. The colors were dull and uninspiring. The bars were plain rectangles with no styling or visual appeal. The hover effects were basic and clunky. The sort buttons looked like default HTML buttons with no custom styling. The animations were jerky and slow. It technically worked, but it looked unprofessional and outdated. This wasn't even close to what I was expecting. Winner: Grok 4 inside Genspark. Not even close. Test four.
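Under the hood, the two sort buttons in a chart like this just re-order the same data array and re-render. A minimal sketch of that logic in plain JavaScript; the language names and percentages below are made up for illustration, not taken from either platform's output:

```javascript
// Hypothetical chart data; the real top-10 list isn't shown in the video.
const languages = [
  { name: 'Python', pct: 28 },
  { name: 'JavaScript', pct: 22 },
  { name: 'Java', pct: 16 },
];

// Each button produces a re-ordered copy, which the chart then animates to.
const byPopularity = [...languages].sort((a, b) => b.pct - a.pct);
const alphabetical = [...languages].sort((a, b) => a.name.localeCompare(b.name));

console.log(byPopularity.map(l => l.name));  // ['Python', 'JavaScript', 'Java']
console.log(alphabetical.map(l => l.name));  // ['Java', 'JavaScript', 'Python']
```

The animated transitions both platforms attempted come from re-rendering bar positions after the sort, which is presentation work layered on top of this two-line core.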

### [5:32](https://www.youtube.com/watch?v=Ut64DWyaRBw&t=332s) Algorithm Implementation

Algorithm implementation. Exact prompt: implement a pathfinding algorithm that finds the shortest route in a 20x20 grid maze from start to end point, visualizes the path exploration in real time, and allows users to draw walls by clicking cells, complete with a working demo in HTML. This test revealed a massive weakness in Grok 4 inside Genspark. The result was a complete failure. When I opened the HTML file, all I saw was a white background with absolutely nothing on it. There was no grid visible anywhere. I couldn't see any maze structure. There was no start point or end point marked. I couldn't draw walls by clicking because there were no cells to click on. The pathfinding algorithm wasn't running because there was nothing to find a path through. It was like asking for a car and getting an empty parking lot. Completely broken and unusable. The main Grok website delivered exactly what I asked for. It created a perfect 20x20 grid with clearly defined cells. The start point was marked in green and the end point in red. When you clicked on any cell, it would turn black to create a wall. The pathfinding algorithm would then recalculate the shortest route and show it in real time with a blue line. You could watch the algorithm explore different paths as it searched for the optimal route. The visualization was smooth and educational. You could create complex mazes by drawing walls and watch the algorithm adapt. It was everything I requested and more. Winner: main Grok website. Grok 4 inside Genspark completely failed this one. Test five.
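For reference, the shortest-route behavior described in test four is what breadth-first search gives you on an unweighted grid. The video doesn't show which algorithm Grok actually generated, so this is an illustrative sketch with my own function and variable names:

```javascript
// Breadth-first search on a size x size grid. Walls are a Set of
// cell indices (r * size + c). Returns the shortest path as [r, c]
// pairs, or null if the end is unreachable.
function shortestPath(walls, size, start, end) {
  const key = ([r, c]) => r * size + c;
  const prev = new Map([[key(start), null]]); // parent links; doubles as "visited"
  const queue = [start];
  while (queue.length > 0) {
    const [r, c] = queue.shift();
    if (r === end[0] && c === end[1]) {
      const path = []; // walk parent links back from end to start
      for (let k = key(end); k !== null; k = prev.get(k)) {
        path.unshift([Math.floor(k / size), k % size]);
      }
      return path;
    }
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = r + dr, nc = c + dc;
      if (nr < 0 || nc < 0 || nr >= size || nc >= size) continue;
      const k = nr * size + nc;
      if (prev.has(k) || walls.has(k)) continue;
      prev.set(k, key([r, c]));
      queue.push([nr, nc]);
    }
  }
  return null;
}

// Drawing a wall in the demo would just be walls.add(r * 20 + c),
// followed by a re-run. On an empty 20x20 grid the shortest route
// visits 39 cells (38 moves).
const path = shortestPath(new Set(), 20, [0, 0], [19, 19]);
console.log(path.length); // 39
```

The "watch it explore" visualization the main Grok version showed corresponds to rendering cells as they are added to `prev`, rather than only drawing the final path.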

### [6:55](https://www.youtube.com/watch?v=Ut64DWyaRBw&t=415s) HTML Memory Game

HTML memory game. Exact prompt: create a playable memory card matching game with 16 cards, eight pairs using emoji as images, flip animations, a timer showing elapsed seconds, a move counter, and a restart button, all in pixel art style in one HTML file. Grok 4 inside Genspark produced something that looked promising but didn't work. The interface was nicely designed, with 16 cards arranged in a 4x4 grid. The pixel art styling was consistent and appealing. The timer was visible and counting seconds. The move counter was displayed clearly. The restart button was prominently placed. But when I actually tried to play the game, that's when the problems became obvious. When you clicked on a card to flip it, instead of showing an emoji, it would show a white or completely empty card. The card-flipping animation worked fine, but there was nothing to see on the flipped cards. You couldn't match pairs because there were no images to match. The core game logic was completely broken, even though the UI looked good. The main Grok website produced a game that actually worked. When you clicked on a card, it would flip over smoothly and reveal a colorful emoji. Each emoji appeared exactly twice, so you could find matching pairs. The timer counted elapsed seconds accurately. The move counter tracked every flip you made. When you found a matching pair, the cards would stay flipped. When you mismatched, they would flip back after a brief delay. The restart button would shuffle the cards and reset the timer and move counter. The whole game was playable and enjoyable from start to finish. Winner: main Grok website. The game actually worked. Test six: HTML typing game.
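The "each emoji appears exactly twice" behavior that the main Grok version got right comes down to deck construction plus a shuffle. A sketch, assuming a standard Fisher-Yates shuffle; this is not necessarily what either platform generated, and the emoji set is my own example:

```javascript
// Build a 16-card deck from 8 emoji: duplicate each symbol once,
// then Fisher-Yates shuffle so pair positions are random. The restart
// button in such a game would simply call this again and reset the
// timer and move counter.
function buildDeck(emojis) {
  const deck = [...emojis, ...emojis];
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]]; // swap
  }
  return deck;
}

const deck = buildDeck(['🍎', '🍌', '🍇', '🍒', '🍋', '🍉', '🍑', '🥝']);
console.log(deck.length); // 16 cards, each emoji exactly twice
```

The Genspark failure (cards flipping to blank faces) is consistent with the flip animation being wired up but the card faces never being populated from a deck like this.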

### [8:28](https://www.youtube.com/watch?v=Ut64DWyaRBw&t=508s) HTML Typing Game

Exact prompt: build a typing speed test game that shows random words to type, tracks WPM and accuracy, highlights errors in red, has a 60-second timer, and displays results, with a pixel art style interface. This was where Grok 4 inside Genspark really shined. The game it created was fantastic. The interface had a clean pixel art design with retro styling that felt engaging. Random words would appear on screen for you to type. As you typed, it would highlight correct letters in green and incorrect letters in red in real time. The WPM calculation was accurate and updated constantly as you typed. The accuracy percentage was displayed prominently and calculated correctly. The 60-second timer counted down with a clear display. When time ran out, it showed your final results with WPM, accuracy percentage, and total words typed. The whole experience was smooth and professional. The main Grok website completely missed the mark on this one. Instead of creating an HTML typing game like I requested, it gave me Python code. When I tried to run the Python code, it threw errors and didn't work at all. It seemed to misunderstand the request entirely. I wanted a web-based HTML game that I could play in a browser, but it gave me broken Python code instead. This was a complete failure to understand what I was asking for. Winner: Grok 4 inside Genspark. Main Grok completely missed the mark. Test seven: particle system demo. Exact prompt: create an interactive particle system where users can click to spawn colorful particle explosions, adjust particle count, speed, and gravity with sliders, showing an FPS counter, all with smooth animations. Both platforms delivered working particle systems, but with different approaches. Grok 4 inside Genspark created a functional demo where you could click anywhere on the screen to spawn colorful particle explosions. The particles would burst outward from the click point with different colors and sizes.
The sliders for particle count, speed, and gravity all functioned properly and would adjust the particle behavior in real time. The FPS counter was displayed in the corner and showed accurate frame rates. The animations were smooth and responsive. The main Grok website also created a working particle system, but with a more polished feel. The particle explosions were more visually appealing, with better color combinations and more natural movement patterns. The particles had more realistic physics, with better gravity simulation and collision effects. The sliders were more responsive, and the changes felt more immediate. The overall user experience was more engaging and fun to interact with. The FPS counter was more prominently displayed, and the performance seemed better optimized. Winner: main Grok website. Both worked, but main Grok had the better user experience. The score breakdown: let me break down the actual results. Grok 4 inside Genspark won two tests. It dominated the data visualization test by a huge margin, with professional-looking output that was far superior to the main Grok website. It also completely won the typing game test, where main Grok failed to understand the request entirely. The main Grok website won four tests. It won the HTML runner game test after the feedback loop helped it fix initial problems. It completely dominated the algorithm implementation test, where Grok 4 inside Genspark failed entirely. It won the memory game test because it actually worked while Grok 4 inside Genspark was broken. And it won the particle system test with better user experience and more polished output. There was one tie, with the sliding puzzle game, where both platforms delivered equally good results. Here's what I learned from these seven tests. Grok 4 inside Genspark excels at visual design tasks and modern web interfaces. When you need something that looks professional and aesthetic, Grok 4 inside Genspark delivers fast results that look great.
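As an aside on the typing test: the WPM and accuracy numbers that Genspark's version got right are simple formulas, with the usual convention that five typed characters count as one "word". A quick sketch with my own helper names, not the generated game's code:

```javascript
// Net typing speed: (characters / 5) words, scaled to a per-minute rate.
function wordsPerMinute(charsTyped, elapsedSeconds) {
  return (charsTyped / 5) / (elapsedSeconds / 60);
}

// Accuracy as a percentage of correctly typed characters.
function accuracyPct(correctChars, totalChars) {
  return totalChars === 0 ? 100 : (correctChars / totalChars) * 100;
}

console.log(wordsPerMinute(300, 60)); // 60 WPM over a full minute
console.log(accuracyPct(225, 300));   // 75
```

Recomputing these on every keystroke is what makes the counters "update constantly as you type"; the red/green highlighting is just a per-character comparison against the target word rendered alongside.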
The data visualization test proved this perfectly. It creates modern, stylish interfaces that feel current and professional. However, Grok 4 inside Genspark struggles with complex game mechanics and algorithmic challenges. The runner game had broken collision detection and jump mechanics. The memory game had broken card reveal functionality. The pathfinding algorithm was a complete failure, with no visible output at all. The main Grok website excels at complex logic, game mechanics, and algorithmic tasks. When you need something that works perfectly under the hood, main Grok delivers reliable functionality. It also handles feedback much better and can debug and improve its own code. The runner game example showed this perfectly, where it fixed itself after feedback. The main Grok website struggles with modern visual design and sometimes with understanding specific HTML requests. The data visualization looked completely outdated and unprofessional. The typing game request was misunderstood entirely, resulting in broken Python code instead of the HTML game I requested. The platform strategy that works, based on these results: here's the strategy I'm using now. For data visualizations, modern web interfaces, and anything that needs to look professional and current, I use Grok 4 inside Genspark. It's faster and produces better-looking results that feel modern and polished. For complex algorithms, game mechanics, and anything that requires perfect functionality, I use the main Grok website. It's more reliable for complex logic and handles debugging better when things go wrong. This approach has transformed how my agency handles AI-powered development. We're not fighting against platform limitations anymore. We're playing to each platform's strengths and getting better results faster. If you want to scale your business and save hundreds of hours with AI automation like this, join my AI Profit Boardroom.
We currently have 1,000 members who are implementing these exact strategies to get better results from AI platforms. Want a personalized strategy for your specific business? Book a free SEO strategy session. The link is in the comments and description. For step-by-step processes and over 100 use cases, check out the AI success lab. Link in the comments and description. You'll get access to checklists and tutorials, video notes, and training materials that 14,000 members are already using to transform their businesses. Comment below and let me know which platform you think performed better overall. And remember, Julian Goldie reads every comment, so make sure you share your thoughts.
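To close the loop on test one: the two failures described there, a character running through obstacles and jumps that can't clear them, are both small numeric bugs. A hedged sketch of what working versions look like, assuming an axis-aligned bounding-box model and simple constant-gravity jump physics; all names and values here are illustrative, not from either platform's output:

```javascript
// Axis-aligned bounding-box overlap: the check a runner game needs
// so the character actually stops when it hits a block.
function collides(a, b) {
  return a.x < b.x + b.w && a.x + a.w > b.x &&
         a.y < b.y + b.h && a.y + a.h > b.y;
}

// Peak height of a jump under constant gravity: v^2 / (2g).
// If this comes out below the obstacle height, no input can clear it,
// which is exactly the "jump height way too low" symptom.
function peakJumpHeight(jumpVelocity, gravity) {
  return (jumpVelocity * jumpVelocity) / (2 * gravity);
}

const player = { x: 50, y: 0, w: 16, h: 16 };
const block  = { x: 60, y: 0, w: 16, h: 16 };
console.log(collides(player, block)); // true: the boxes overlap
console.log(peakJumpHeight(10, 0.5)); // 100 units, to compare against obstacle height
```

The point of the sketch is that "looks fine, plays broken" usually means rendering works while one of these two checks is missing or mistuned, which matches what the Genspark version produced before any feedback loop.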

---
*Source: https://ekstraktznaniy.ru/video/6188*