where things get really interesting. Ling-1T doesn't just reason, it designs. It excels at front-end code generation and visual reasoning thanks to a new Syntax-Function-Aesthetics reward mechanism. That means it doesn't only produce correct, functional code. It produces code that looks good, that aligns with user intent, and that understands layout, color, and design principles. On ArtifactsBench, a benchmark for front-end and design intelligence, Ling-1T actually ranked first among all open-source models. And get this: the benchmark visualizations shown on its Hugging Face model card were generated by Ling-1T itself. We're seeing the early signs of models that not only solve logical problems, but understand aesthetic structure and human visual preferences.

On the screen, you can see me ask Ling-1T to generate a website for a clothing store, specifically for gymwear. It's generating an HTML file for me that I can use. Let's see what it creates. All right, let me just paste my code in here. Um, okay. Looks okay. Uh, as you can see, the code is in here. It called itself "FlexWear." Um, there are buttons on there: moisture-wicking, four-way stretch, recycled materials. Oh, it even offered a discount. The collection button doesn't really work, obviously, and the other pop-up buttons don't work either, but not too bad for what it has created. Okay, it even validates whether my email works or not, which is pretty good. And there's a returns section. Okay, I think that's where it stops.

All right, let's try another prompt with the new model. We're going to ask it a reasoning question: Anna takes a ball, puts it in a red box, then leaves the room. Bob takes the ball out of the red box, puts it into the yellow box, then leaves the room. Anna returns to the room. Where will she look for the ball? So, the answer should be the red box, because Anna put it in the red box.
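As a quick aside before the reasoning test: the email check the generated page performed can be sketched roughly like this. This is a hypothetical re-creation in Python (the actual page would almost certainly do it in JavaScript, and I haven't inspected its exact logic); the regex and function name here are my own.

```python
import re

# Rough sketch of an email-format check, similar in spirit to what the
# generated storefront page appears to do before accepting a signup.
# (Hypothetical re-creation; not the model's actual code.)
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a plausible email."""
    return bool(EMAIL_RE.match(address))

print(is_valid_email("customer@example.com"))  # True
print(is_valid_email("not-an-email"))          # False
```

A simple format regex like this catches obvious typos but can't confirm the mailbox exists; real pages usually pair it with the browser's built-in `type="email"` validation.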
And I'm going to ask the model to not only solve this, but show me what it's thinking and how it got to its answer. So, I'm going to ask: show me how you got the answer. All right, let's see.

Okay, so it did get the answer right: Anna will look in the red box, because that's where she last placed the ball and believes it still is. And obviously she didn't see Bob moving it into the yellow box. Okay, makes sense. Let's see how it reasons it out. Number one, it says Anna's initial action places the ball in the red box; her belief is that the ball is in the red box. Okay, that's pretty nice. Number two, Bob's action happens while Anna is absent. So it understands that Anna is not in the room, and whatever Bob does should not be visible to Anna. And then, when Anna returns, she has no new information about the ball's location. Her mental state still reflects her last observation. Therefore, she searches where she thinks it is: the red box.

This tests theory of mind. Okay, this is kind of cool. It's giving us the philosophy, or the reasoning it used to help solve it, which is theory of mind: the ability to attribute false beliefs to others. Children under four years typically fail this task, searching the yellow box where the ball actually is. Success indicates understanding that others can hold incorrect beliefs about the world. That's pretty sick. "What's fascinating is how this simple experiment reveals whether someone grasps that beliefs don't always match reality. Do you work with developmental psychology, or is this curiosity-driven?" That, I think, is very cool, and kind of scary that it's asking me that question. As we can see, this model is clearly smart and it definitely reasons step by step. So it obviously is ready for reasoning tasks and harder questions.

Just for testing purposes, we're going to ask ChatGPT the same question and see what it does. Okay, it's thinking. So obviously it did get this basic question right, which is: look in the red box. Uh, how did it get it?
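The step-by-step reasoning the model laid out can be captured in a tiny simulation of the classic false-belief task: each agent's belief about the ball only updates while they're in the room. This is a minimal sketch; the names and structure are mine, not the model's.

```python
# Minimal sketch of the false-belief ("Sally-Anne") task the model solved:
# an agent's belief about the ball updates only while they are in the room.

def run_false_belief_task():
    actual_location = None
    beliefs = {"Anna": None, "Bob": None}
    in_room = {"Anna": True, "Bob": True}

    def move_ball(box):
        nonlocal actual_location
        actual_location = box
        # Only agents currently in the room observe the move.
        for person, present in in_room.items():
            if present:
                beliefs[person] = box

    move_ball("red box")           # Anna puts the ball in the red box
    in_room["Anna"] = False        # Anna leaves the room
    move_ball("yellow box")        # Bob moves it; Anna doesn't see this
    in_room["Anna"] = True         # Anna returns with a stale belief

    return beliefs["Anna"], actual_location

where_anna_looks, where_ball_is = run_false_belief_task()
print(where_anna_looks)  # red box
print(where_ball_is)     # yellow box
```

The gap between `where_anna_looks` and `where_ball_is` is exactly what the test probes: passing requires modeling Anna's outdated belief separately from reality.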
It says something like: no spoilers-y mind reading, just facts and a belief timeline. Anna puts the ball in the red box, leaves, and then returns. It does walk us through how it did it, but the other model obviously did it much better, in more detail, and also had a follow-up question, which I think was pretty cool. So, you know, the answer is pretty straightforward, but we can see how both of these models think and reason differently and provide different answers at the end. In conclusion