Did OpenAI Just Help the Government Kill Anthropic?


TheAIGRID · 28.02.2026 · 11,202 views · 502 likes


Video description
🎓 Learn AI In 10 Minutes A Day - https://www.skool.com/theaigridacademy Get your Free AGI Preparedness Guide - https://theaigrid.kit.com/agi 🐤 Follow Me on Twitter https://twitter.com/TheAiGrid 🌐 Learn AI Business For Free https://www.youtube.com/@TheAIGRIDAcademy Links From Today's Video: 00:00 Political Threats 00:25 Pentagon Pressure 01:37 Escalation Fallout 03:19 Secret Deal 05:32 Fine Print 07:18 Trust Collapse 08:54 Talent Exodus 12:53 Cancel Movement 15:00 Public Support 16:13 App Store Surge 19:55 Celebrity Switch 22:25 Political Theater 24:22 Investment Shock 27:07 Is Anthropic Finished? Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For Business Enquiries) contact@theaigrid.com Music Used: LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 CC BY-SA 4.0 LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (14 segments)

Political Threats

So, OpenAI just did something absolutely crazy, and we have to talk about it. One of the first things I want to cover is what just happened to Anthropic, because it sets up exactly what OpenAI just did; the entire situation is connected. What you have to understand here is that Donald Trump came out on his platform, Truth Social, and made a statement that basically escalated this for everyone involved. He said that the United States

Pentagon Pressure

of America will never allow a radical left (and these are his words, not mine, by the way) woke company to dictate how our great military fights; that will be determined by our commander-in-chief. He is framing Anthropic here as a company that has made a disastrous mistake by trying to strongarm the Department of War and force it to obey their terms of service instead of the Constitution, and apparently their selfishness is putting American lives at risk. I'm going to get into where this leads to OpenAI in a second, but you have to understand that this statement is completely false. Anthropic wasn't trying to override the Constitution; they were enforcing their own terms of service as a private company, which is completely normal and legal. The "putting American lives at risk" framing is a real stretch, and Anthropic has supported all lawful military uses, just not the two we previously discussed: autonomous weapons and mass surveillance. The thing is, OpenAI ultimately had those same lines in their terms of service, which undermines the entire narrative, and the "woke company" framing is pretty ridiculous. But we have to understand the political landscape here. This plays

Escalation Fallout

well with Trump's base, because framing Anthropic as dangerous to troops is emotionally resonant even if it's factually thin, and the threat of civil and criminal consequences, like I said before, is a significant escalation. The president threatening a private American AI company is pretty crazy. And think about this: if the government can threaten criminal consequences to force AI companies to comply, what does that mean for AI safety in the long run? If the government is just going to strip out all the guardrails, what does this mean for companies in the future? You can also see him directing every federal agency in the United States, which is pretty bleak if I'm being completely honest with you, to immediately cease all use of Anthropic's technology. "We don't need it" (I would heavily disagree with that part) "and we don't want it, and we will never do business with them again. There will be a six-month phase-out period so that departments can transition to other software." And they said they will never let the fate of the country be run by some out-of-control radical left AI company, by people who have no idea what the real world is all about. This precedent is insane. The government is essentially saying that private companies cannot have ethical guidelines when contracting with the military, and that opens the door to every AI company being coerced into removing guardrails or losing federal business entirely. This situation could have severe consequences. But the craziest thing about all of this is that it was Sam Altman who literally went on CNBC just a few hours ago to say that the Pentagon and the government shouldn't be threatening these companies, and that they should be reaching agreements. But then a few

Secret Deal

hours later, Sam Altman actually turned around and reached a deal with the Pentagon. And this is one of the craziest things here, because most people don't realize the trickery in the statement, and I'm going to show you why it's so important. I really do love this tweet, because it says: "The concept of calling an AI company far-left because they're not allowing you to spy on American citizens and implement it into your war strategy." It's pretty crazy. So what is going on with OpenAI here? Well, OpenAI said, "We've reached an agreement with the Department of War to deploy our models in their classified network." On the surface, it seemed like OpenAI, Google, and Anthropic were all essentially holding the line so they could reach some kind of agreement where they wouldn't be forced to build autonomous weapons with tools like LLMs, which are so probabilistic in nature that you wouldn't even want to deploy them that way. And at the start, before I did more research, it looked like the agreement Sam Altman reached was essentially the same agreement Anthropic was offered. But that isn't the truth. If you dive into the details, you'll see that unfortunately OpenAI basically took the deal that Anthropic said they weren't going to take and made it look the same in public. What they actually said was that in all of their interactions, the Department of War displayed deep respect for safety and a desire to partner to achieve the best possible outcome. And here's the part where people might get confused: it says that two of their most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapon systems.
"The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement, and we will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We'll deploy here, we'll deploy there. At first, it might seem like OpenAI simply managed to get this deal, so what's wrong with that? But this tweet kind of

Fine Print

shows why their deal wasn't the same, and a community note afterwards shows how OpenAI got the deal across where Anthropic wasn't able to. It says here that everyone is saying OpenAI got the same deal Anthropic was banned over, but if you read the fine print, they're not the same. On weapons, Anthropic asked for no fully autonomous weapons without human oversight, but OpenAI's deal says "human responsibility for the use of force," which just means someone is accountable, and that can happen after the fact. If Kale is right here, then the shift from oversight to responsibility matters: one requires a human before the trigger; the other basically just requires a name on a piece of paper. And on surveillance, Dario said explicitly that current law hasn't yet caught up with AI. The government can already buy your data, your browsing history, pretty much whatever it wants, all without a warrant, and AI can assemble all of that data into a complete picture of your life at scale. That is mass surveillance without breaking a single law, which is why Anthropic wanted protections beyond current law. But OpenAI's deal just said it "reflects them in law and policy," which is basically saying "the law as it is." And the law as it is, Anthropic argues, is insufficient: yes, it's legal, but just because it's legal doesn't make it correct, so they would rather put those safeguards in place now, so that as the tech gets better, this can't happen. So if Kale is right and OpenAI's deal is genuinely weaker (humans responsible after the fact rather than before, existing law rather than protections beyond it), then it looks like Sam Altman didn't just opportunistically take Anthropic's contract. He may have taken a watered-down version of it and dressed it up in the same language to make it look equivalent.
And that's the kind of thing that drives people insane. These

Trust Collapse

are the people who understand the technical and legal distinctions between oversight and responsibility, and they will read this thread and understand immediately what it means. You can't paper over that difference with careful wording in a tweet. This is why I said that this entire move by OpenAI is only going to drive down consumer trust in the brand even more. Many individuals are saying this may actually lead to a mass talent exodus from OpenAI to Anthropic. One post talks about the fact that until now it was possible to brush off Anthropic's principles as empty promises which sounded good but were, in some sense, still untested. Sure, Anthropic had delayed certain features, like giving Claude access to the internet, while they pursued additional safety testing, but it wasn't clear that approach would hold when real pressure arrived. It says: but now the answer is clear. Anthropic is an organization with principles, one that is willing to put its money where its mouth is, and Dario will stand up to an administration that has cowed every other notable tech leader into submission, including Sam Altman. The author says he's personally terrified of what AI will mean for the world, and he's not sure anyone, no matter how principled, can effectively steer the ship. But he does know that Sam and OpenAI definitively cannot, so if he has to entrust the ship to anyone, it would be Dario, along with all the incredible folks he's gotten to know at Anthropic. And this is the thing: we've seen individuals leave OpenAI before, and we've seen that Anthropic has the highest retention of any of the AI labs. It's because they are a company that, I believe, is trying to do the right thing. And of

Talent Exodus

course, when you're worth millions and millions of dollars, or even potentially billions, it becomes very easy to turn a blind eye to certain rules and regulations under the guise of corporate speak or jargon. But it's very clear that Anthropic does have at least some kind of ethics, and they're willing to stand by it. Now, like I said, the public's view of these AI companies is shifting, and it's shifting in real time. I have to say this doesn't bode well for OpenAI, given how much trust they've lost over the years. Sure, they may get the contracts now, but I don't think this bodes well for them in the future. You can see this tweet here: "At what point do we start viewing OpenAI employees the way we view Palantir employees?" The Palantir comparison is really sharp, because Palantir actually became a dirty word in tech hiring circles. There are engineers who genuinely won't take meetings with Palantir employees, who won't put them through referrals, who treat it as a black mark on your values. That's a real social consequence that plays out quietly in an industry that is small and interconnected. And the thing that makes this tweet sting specifically is the Axios detail: the Pentagon accepted OpenAI's conditions, which were essentially identical to Anthropic's. The whole framing of Anthropic as being difficult and ideological pretty much collapses. If you look at the entire situation, the government accepted the same terms from a company it politically preferred, which means it wasn't about the terms at all. That puts OpenAI employees in a genuinely uncomfortable position, because their leadership just demonstrated that they will publicly claim to share principles, privately negotiate those same principles, and then take the deal the moment their competitor gets destroyed for holding the line.
The employees who care about safety, and OpenAI has many of those, now have to reconcile working for a company that used Anthropic's blacklisting as a business opportunity, while Sam Altman was on CNBC saying that he trusted Dario Amodei. Whether this reaches Palantir levels of stigma depends entirely on what OpenAI does next, but the seeds are planted. And like I said, Sam Altman has already been community noted. The note says government officials have contradicted Sam Altman's claim, saying that OpenAI will allow the Department of War to use their models for all lawful purposes, and that this could allow uses Anthropic refused to engage in, namely mass surveillance tools and weapons with no human oversight. This is why I say that when you look at the entire thing, you can see many people calling it the moral collapse of OpenAI in one post. You have many individuals saying, "I just cancelled my OpenAI subscription and upgraded to Claude Max." I think this is not a good look for OpenAI. Sure, they may get the contracts, but having the public rallying behind you is something money cannot buy, and OpenAI have time and time again shown that they're willing to put profits over everything. Yes, they are a private company under a lot of scrutiny, maybe even from private investors; a lot of people in the VC world are wondering how on earth they are going to spend this money. But I think you can do that in a way that actually achieves the goal without scorching your reputation in public. Another person tweeted that Sam Altman is such an incredible backstabber, liar, and traitor: while your competitor is taking a heroic and principled stand, you swoop in to make your deal. Imagine working for this guy. Is there a greater shame? This should lead to a mass exodus from OpenAI.
And you have to remember that this comes at a pretty bad time, because if you weren't paying attention to the wider frame of how people view OpenAI, there was already a "quit GPT" movement urging people to cancel their ChatGPT subscriptions. This

Cancel Movement

movement was born out of the fact that Greg Brockman, the co-founder of OpenAI, donated $25 million to Trump's campaign, and many individuals were pretty upset about this. That movement has just continued to get fuel as the frustrations pile on. I've actually looked at their Instagram, and the videos are getting millions and millions of views. Many people are rallying around the fact that, number one, OpenAI doesn't seem to be a morally good company; morally, they seem pretty bankrupt. And then when you look at the fact that other AI companies actually have even better offerings, the cycle just continues to repeat. Another tweet says: "I've been a loyal OpenAI guy from day one. I've had an account since they first released, and I've used it daily since, I think, GPT-3.5. Based on Sam Altman's move tonight to accept the DoD's terms that Anthropic would not, I am removing all traces of Codex and OpenAI software from all my devices. I kind of hate to do this, because it's going to really compromise my ability to be productive in the short term, but the alarm bell is going off full blast in my head right now. This has to be the most egregious incursion on privacy the world may have ever seen. I think Sam Altman just saw too many dollar signs and didn't realize how obvious the optics of this move would be. But it's clear to me now that OpenAI is compromised. Not sure what the next step will be from here, but the trust is totally gone. Very disappointing." And you can see here the administration confirming that OpenAI's contracts contain the "all lawful use" wording Anthropic rejected. Sam's wordsmithing aside, that means the door for Trump, or a future leader, to authorize autonomous weapons or mass domestic surveillance with AI is now open. Of course, this is not a good situation.
Now, if we look at what is going on, you can see that Anthropic is receiving an overwhelming amount of support. Outside Anthropic's offices in San Francisco, many people are writing in chalk: thank you

Public Support

for standing up. Thank you. We love you. God loves Anthropic. You cannot ask for a more supportive community, and Anthropic only have themselves to thank for it, because they stood up for American values. The two things they're refusing aren't that crazy, considering that when you actually look at the law, those uses are illegal anyway; it really doesn't seem like that big of a deal. But I'm guessing, like I said, that at that level, when you're dealing with millions and billions of dollars, most individuals simply only care about the dollar signs. So when you have a company like Anthropic that is able to really show what it means to have ethics and morals, you can see how that translates into overwhelming community support. The post has a million views, 25,000 likes, and 4,000 retweets, and I of course retweeted it, because I think it's important to support this company: they seem like the only company focusing on the ethics of everything, and maybe that might just lead them to win in the end. Now, the thing I do want to talk about is how this is going to play out over the next couple of months.

App Store Surge

You can see here that someone tweeted that Claude just jumped to number two on the iOS App Store, up from number 129 a month ago. The free-app section is usually dominated by ChatGPT and other apps, so Claude being number two is pretty surprising, given that it was at 129 a month ago. But the question I want to ask is: does consumer revenue actually matter for these large companies if many individuals stop using the products? When you think about it, OpenAI's consumer offerings right now are pretty basic and pretty slim, and a lot of their money is coming from enterprise contracts with other companies. So when consumers are uninstalling the app, most people might argue that the average user paying $20 a month is not how they make most of their money. The real money is enterprise contracts, API usage from businesses building on top of those models, and of course government deals. So the App Store moment may not matter on the surface, but it might matter more than it looks, because when a CTO is deciding which AI to build their product on, they want the one with cultural momentum, developer mindshare, and brand trust. Consumer perception feeds into those enterprise decisions directly. The jump from 129 to two in a month really does tell you something deeper: there's a massive audience of normal individuals who were aware of Claude but never felt a real reason to use it over ChatGPT, and today gave them a reason. Anthropic just became the AI company that said no to the government, and for a lot of people that is far more compelling, regardless of whether the product is better (which, of course, Claude is).
Now, the real question for Anthropic's business is whether they can convert this cultural moment into developer adoption and enterprise deals before the news cycle moves on, because the App Store hype will fade. What lasts is whether companies are building on Claude because they trust the brand more than they did yesterday. And if you're wondering whether OpenAI's revenue is going to take a hit from everyone uninstalling in the short term, I actually don't think so. Their revenue is dominated by enterprise and API, and consumer sentiment is basically noise at this point. But there is a slow-burn risk that people underestimate. Developers matter enormously: the people building startups have real opinions, and they saw what Sam Altman did today. Those people shifting their loyalty over the next 12 to 18 months is dangerous for OpenAI in a way that angry tweets from normal users simply aren't. The other notable thing is that OpenAI has survived massive public hatred before: the Sam Altman firing and rehiring was a complete circus that made them look like a dysfunctional mess, yet they came out commercially stronger than ever. So there's a real argument that the business is genuinely insulated from public perception in a way that almost no other consumer-facing company is. The uninstalls will spike today, but maybe they'll plateau. Most people will still use ChatGPT because it's their default, they know it, and switching AI tools has friction. Brand loyalty in AI right now is weak across the board, and people tend to use whatever is most convenient, not whatever they morally approve of. That is just the reality. But over time, one of OpenAI's key ways of making money is going to be selling hardware products.
And if those hardware products have even a smidgen of failure, the entire world is going to go wild giving OpenAI negative PR. And that of

Celebrity Switch

course may affect their future sales and future cash flows, which is not good. It got to the point where literally Katy Perry said she's signing up for Claude Max, which is pretty crazy. When you think about who Katy Perry is and who her audience is, that's not the typical Claude user. That's mainstream America suddenly becoming aware that there's an AI company that refused to let the military use its tech for autonomous weapons, and a huge chunk of those people are going to find that admirable regardless of their politics. Now, if you're wondering how this situation panned out with Pete Hegseth, it wasn't just Trump; it was him, too, and he said something completely crazy. He said that this week Anthropic's CEO delivered a masterclass in arrogance and betrayal, as well as a textbook case of how not to do business with the United States Pentagon. He said their position has never wavered and never will waver: the Department of War must have full, unrestricted access to Anthropic's models for every lawful purpose in the defense of the republic. Instead, Anthropic and its CEO have chosen duplicity, cloaked in the sanctimonious rhetoric of effective altruism, and have attempted to strongarm the United States military into submission, a cowardly act of corporate virtue signaling that places Silicon Valley ideology above American lives. And he said their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. I'm just going to stop reading right there, because the "seize veto power over the military" framing is doing enormous rhetorical work here. It sounds terrifying, but it's also completely dishonest. Every single company that sells software to the government has terms of service. Microsoft, Google, Palantir, Lockheed Martin, they all have usage restrictions baked into their contracts. Nobody calls that seizing veto power over the military.
Anthropic having two specific restrictions is not categorically different from any other vendor relationship the government has ever had. The effective-altruism attack is interesting because it's specifically designed to trigger a cultural reaction in people who associate effective altruism with Silicon Valley elitism. It's not engaging with any argument at all; it's just attaching a label that makes Anthropic sound out of touch, like tech bros trying to play god. The thing that genuinely makes me uncomfortable here, regardless of where people stand politically, is that Hegseth says the decision is final, and then hours later they accepted the same terms from OpenAI. So either the decision

Political Theater

wasn't final, or the terms were never actually the problem. Either way, the statement looks like political theater designed to destroy a specific company rather than a principled stance about military readiness. And the most damning four words in this entire saga are "a more patriotic service," because they explicitly tell every AI company in America that government contracts are no longer about capability or safety; they're about political alignment. That's pretty terrifying when you think about it. Now, Anthropic really do not care. Earlier today, Pete Hegseth shared on X that he's directing the Department of War to designate Anthropic a supply chain risk. Anthropic responded that, after months of negotiation, they are deeply saddened by these developments, but they have still doubled down. Credit where credit is due: they said, "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court." Like I said before, I have to give it to Anthropic: they are willing to go the mile here, and I don't think what they're requesting is that crazy. Now, one of the biggest questions many individuals are asking, and it's pretty concerning that we're even at this stage, is whether Anthropic makes it out of this alive. I would be deeply unsatisfied and upset if they didn't. I think they will, but I don't think they're going to come out of this unscathed. Look at this tweet from Dean W. Ball. It should genuinely put your mind in the state of "wow, this is a serious situation," because this isn't some random person; this is someone in serious AI policy circles saying out loud that the US government just made itself uninvestable for AI. He

Investment Shock

says that Nvidia, Amazon, and Google will have to divest from Anthropic if Hegseth gets his way. "This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor, and I could not possibly recommend starting an AI company in the United States." So think about it: this is serious. The Amazon angle is one that nobody is talking about enough. Amazon has poured billions of dollars into Anthropic, and if the supply chain risk designation holds and defense contractors cannot do business with Anthropic, Amazon has a problem, because Amazon Web Services is absolutely crawling with government and military contracts. They may be legally forced to choose between their AWS government business and their Anthropic investment, and that is an enormous consequence that goes way beyond the Pentagon contract. Same with Google. Same with Nvidia. These are not small players who can quietly ignore a government designation; their legal and compliance teams are going to be extremely nervous about this. The broader chilling effect he's pointing to is real. Every AI founder in America just watched the government threaten criminal charges against a company for having ethical guidelines in its terms of service, and that is the kind of thing that makes the European and Canadian AI ecosystems look more attractive to founders and investors who don't want a company's survival subject to political whims. The irony is that America was supposed to win the AI race partly because of its open market, but today it made the argument that authoritarian government control of AI companies is not just a China problem. And remember, this isn't a random statement from a random person. Dean Ball is not just some random critic: he literally worked inside the Trump White House as the senior AI policy adviser, and he helped write Trump's own AI action plan.
This is an insider, someone who was ideologically aligned with the administration and believed in what they were trying to do with AI, and even he is saying this crosses a line he cannot defend. When the guy who helped build your AI strategy says he can no longer recommend investing in American AI companies, that is not a partisan attack; that is an alarm from someone with credibility on both sides of the debate. Now, is Anthropic finished? This is the question most people are arguing about. If people can't invest in it, and companies doing business with the government can't do business with it, how screwed is Anthropic? You can see this tweet here that says Anthropic is finished, because even if they win in court, it would take years and they would be financially ruined in the process. Investors would pull out,

Is Anthropic Finished?

not wanting to get on the bad side of the administration; Dario Amodei destroyed his company today. But when you think about this, I don't think they're done, because the Pentagon contract alone was only worth up to $200 million, and Anthropic raised $7.3 billion in 2024 alone, so they're not going to go bankrupt just over this. Their biggest customers are enterprise and consumer, and Amazon has a massive investment and partnership with them, so the military is just one revenue stream. Of course, they're going to challenge that supply chain designation in court, and they may actually win, since the designation is legally meant for foreign adversaries, not US companies. Other governments, like the EU and the UK, will likely prefer an AI company that stood up to autonomous-weapons pressure. But this is genuinely going to hurt, because being labeled a national security risk is reputationally devastating for enterprise sales. Every US defense contractor is now barred from using them, which is a massive ecosystem problem, and it signals that doing business with the US government going forward could be very difficult under this administration. So, what do you make of this situation? I think this is one of the most interesting times. OpenAI snuck in, made it seem like they were supporting Anthropic, and then went ahead and immediately signed that deal, further putting Anthropic at odds. I guess Anthropic and OpenAI aren't exactly teammates, but at the same time it is pretty concerning that OpenAI have basically given the green light to say, "Look, American government, you can pretty much do whatever you want." Now, some could argue that the American government was going to get its way anyway, so you may as well just sign it now.
But at the same time, the entire situation is clearly political at this point. And I think that as models get more and more capable, we're only going to see more situations where AI systems get more political, because like I said before, intelligence is essentially going to be the same as weapons in the near future. Eventually, AI is going to become the most powerful weapon on the planet, and whoever has control over it is going to have the most power. I know it doesn't seem like that right now, but if AI development continues (just look at how much society has evolved with technology over the past 100 years), AI is clearly going to be political whether we like it or not. So, if you enjoyed this video, I'd love to know your comments and thoughts.
