System design interview: Design Robinhood (with ex-Google SWE)




Table of contents (9 segments)

Segment 1 (00:00 - 05:00)

Hey guys, welcome to another system design mock interview. I'm excited about this one because we have the man from one of our favorite channels, "Jordan has no life", and it's Jordan himself. How you doing, Jordan? Well, fine, thanks for having me on. So just to give a quick introduction about myself: like Tom mentioned, I run a YouTube channel focused on system design interviews, and in addition to that I also work as a software engineer. I was previously working at Google and now I work at a high frequency trading company, so hopefully I have a decent idea of what's going on when it comes to this particular design. Anyways, I'll hand it back to him. Yeah, great to have you on, Jordan, and yeah, it is a good channel; I know a lot of our subscribers actually requested to have you come on here. If you haven't seen Jordan's channel, do go check it out. All right, should we get straight into the interview? Let's do it. All right, so Jordan has seen the question ahead of this; I wanted to try and guarantee that we had a high quality answer that was as useful as possible for you guys. Jordan, the question I want to ask you today is: how would you design a stock trading app such as Robinhood? All right, yeah, thanks for letting me get into it. Let's see if I can type at all... okay, that seems to work. So I guess, like you mentioned, we're designing a stock trading app. I'm going to use Robinhood, since that's the one that I know about the most. So I'm going to try and lay out the requirements of something like this. The ones that I can think of are, of course, things like sending orders, and keeping track of positions that a certain client or customer might actually have, and also (this is the one I want to focus on, so I'm going to put two asterisks here) showing the actual correct price of stocks or options. The reason for that is that when it comes to actually sending orders and 
maintaining your customers' actual holdings, that's kind of trivial; you can do that pretty simply with a couple of databases, and maybe you'll have to shard them a little bit if you really do have a ton of users. For the most part, all the exchanges have an API and you're basically just forwarding orders there. In reality that gets a little more complex with payment for order flow and things like that, but we're not going to touch upon that too much, ideally. So I guess my question for you, Tom, would be: what are the kind of capacity estimates we're looking at? Basically, how many users and how many stocks per user were you thinking? Yeah, sure, let's say 100 million users and around 100 stocks per user. 100 million and 100 per user, okay. So that gives me a couple of ideas here, in the sense that we have 100 million users, but since we also have 100 stocks per user, we're looking at... let's see, what comes after a million? I want to say a billion, oh my gosh, and then 10 billion after that, because it's a hundred. So multiplying the two, we now have 10 billion positions to keep track of; that's a lot of data to worry about. So yeah, a lot of positions. But from here I am going to again talk about my main concerns, because like I mentioned, the part of this problem that I'm really hoping to focus on, if that's all right with you, Tom, is actually showing the correct price of the stock or the option, because I think that is the real complexity here. My main concerns are: I want each client device to have a minimal load, because if each client device is connected to too many of our servers at one time, we could be draining the battery, we could be using too much data, and that would be a problem. And then also it would be great if we weren't connecting too many systems to the exchange, you know, limiting our systems 
with exchange connectivity, the reason being that you have to pay for that, and if we're Robinhood we don't want to pay for too much of it; we don't want to be consuming too much data if we can avoid it, just keeping it within our system. So yeah, do you have any questions about the general design or anything? Jordan works at a high frequency trading company, so obviously he has really strong knowledge of how these kinds of exchanges work. You would not be expected to know all these things; instead, you need to ask the interviewer the right questions in order to understand how the system will need to work. I like that Jordan has already clarified the area of the problem that he hopes to focus on, and explains some of the issues that he knows are going to come up over the scope of his design. Perhaps he could have come up with

Segment 2 (05:00 - 10:00)

some of the capacity estimates himself, but it's okay to ask the interviewer instead. Let's get into it: what's the general breakdown of the system that you want to go with? Sure, okay. So I've thought about having three main pieces for the system, and I'm going to write them down, but then I'll probably introduce them in reverse order. The first is some user layer; these are going to be servers that talk to our phones and computers. Then I'm also going to have basically an exchange layer (I can't spell, sorry), and these are going to be the servers that are getting exchange data. Like I mentioned, a really nice thing about splitting these into their own layers is that we can scale them independently, which is good because we want to keep the exchange layer as small as possible, and we want to keep the user layer perhaps as big as however many client devices there are, to make sure that each client device is only touching one piece of the user layer; but I'll get into that in a bit. And then we've also got this routing layer. So the first thing that I'm going to try and cover, at least in this video, is the exchange layer. For background, for whoever's watching this, because I think it's important for something like Robinhood: most exchanges are not publishing just "Apple is eighty dollars". What they're publishing is that for Apple, someone wants to buy it for $79, someone wants to buy it for $78.5, someone wants to buy it for $78, and then also, on the other side of that, someone wants to sell it for $81, $81.3, and, I don't know, $81.6. So the exchange is never actually giving you a single price that goes directly to the customer, and what we see in Robinhood, which is kind of nice because it simplifies things for us, is one unified price; it's basically this guy right here. So as a result, what we need to do is take the above (hopefully you can see me highlighting stuff) and convert it. Another piece of this which makes it hard is that what I just showed above is just one exchange, but in reality there are a bunch of exchanges that are publishing the same kind of data. I think maybe this is less so the case for stocks, I actually can't remember, but for options exchanges you might have, like, 78.7 and 78.3 and 78.1 and 81.6. What this is known as is the order book, and the point is there's all this data across all these different exchanges, and we have to aggregate it somehow in order to provide just one price to our user. That's what we would call the NBBO, and the NBBO is basically the best bid, which is the best thing on the left side here, the highest price that people are willing to buy for, and the best offer, the lowest price people are willing to sell for. I think in my mind what I'd like to do is return to users the best bid plus best offer divided by two. Do you think that seems generally reasonable? Yeah, okay, sure. So yeah, there are many different ways we could show this; we could do a weighted average or something, but for now I'm going to focus on providing a really easy valuation, which is just the best bid plus the best offer divided by two, the average of the two sides of the order book. So the issue now is that we're getting all of this feed data per exchange, and yeah, there are a lot of exchanges. So if you have any issues with how I'm handling this... So yeah, how are you 
going to deal with that? You know, there are many exchanges when you're calculating the base price in your publishing layer; what's your approach? Sure, right. So I guess the issue is we have a lot of tickers (when I say a ticker I mean symbols like Apple or Google or Meta, the stock names), and we've also got a lot of data, so we probably need some amount of sharding. But in my head, you need to make sure that all of the data for an individual ticker, regardless of exchange, is going to the same node. So personally I would like to shard the exchange layer by ticker symbol, and then, you know, within one node we'll do some processing
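The per-node processing just described can be put into a minimal sketch. This is an illustration, not Robinhood's implementation: a hash map keyed by exchange gives O(1) invalidation of a stale quote, and with only a handful of exchanges a plain scan recovers the NBBO. The sorted doubly-linked-list-plus-hash-map structure Jordan describes in the next segment is an optimization of this same idea (note, as a caveat, that binary-searching a plain linked list still costs O(n) traversal; a skip list or balanced tree would give a true O(log n) insert). All names and prices here are made up for the example.

```python
# Sketch of one exchange-layer node aggregating quotes for a single ticker.
class TickerBook:
    def __init__(self):
        self.quotes = {}  # exchange name -> (best_bid, best_ask)

    def on_quote(self, exchange, best_bid, best_ask):
        # A fresh message from an exchange invalidates its previous quote.
        self.quotes[exchange] = (best_bid, best_ask)

    def nbbo(self):
        # National best bid = highest bid anyone will pay;
        # national best offer = lowest ask anyone will sell at.
        best_bid = max(bid for bid, _ in self.quotes.values())
        best_ask = min(ask for _, ask in self.quotes.values())
        return best_bid, best_ask

    def display_price(self):
        # The one unified price shown to the user: the NBBO midpoint.
        bid, ask = self.nbbo()
        return (bid + ask) / 2

book = TickerBook()
book.on_quote("exchange1", 80.0, 82.5)
book.on_quote("exchange2", 81.0, 82.2)
book.on_quote("exchange3", 82.0, 82.8)
# book.nbbo() is (82.0, 82.2); book.display_price() is about 82.1
```

A new quote from exchange3 of, say, bid 83.0 would simply overwrite its old entry and become the new national best bid, which is exactly the invalidate-and-reinsert behavior described below.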

Segment 3 (10:00 - 15:00)

and ideally what that processing is going to lead us to is getting our NBBO and taking the average of it, so that we can give that back to our customers. Okay, so what I was thinking from there (and let me know if you're okay with me going down this path) is that it's not a trivial task to actually get all of the data from these exchanges and quickly republish it. So would you mind if I did a brief deep dive into the algorithm I was hoping to use to do that? Yeah, please do. Okay, so let me know if you think I'm going in the right direction, but tentatively my plan was to have something like this. We know each exchange is publishing data, and on a node we need to know the best bid and best ask at all times, and when a new best bid or best ask comes in from an exchange, it means that we need to invalidate the old one. So what I was thinking is actually maintaining a doubly linked list. Basically, on one given node, let's say just for Apple, I would do the following: I would say exchange one is saying the best bid is eighty dollars, exchange two is saying eighty-one dollars, and exchange three is saying the best bid is eighty-two dollars. That means that the national best bid is eighty-two dollars, because that's the highest price someone is willing to buy the stock for. My thinking from there was that we want to be able to get that highest number as quickly as possible, and how I would want to organize these things is in a linked list, right? (Shoot, I can't draw; it would be great if I had my iPad, alas I don't.) So we make a linked list, and the nice thing is that if we ever need to quickly compute the highest value, we'll make sure that our linked list stays sorted, so that 82 is always at the top. However, for our linked list to stay sorted, we actually need to be pretty smart about how we organize 
things. So I think what I would like to do is a doubly linked list (can't spell) with a hash map. The nice thing about this hash map is that if a new piece of data from exchange two comes in, let's say someone bids 83 on exchange two, I can see that the node for 81 is here, and because it's a doubly linked list I can quickly remove 81 in constant time; so this is O(1) removes. Then what I could do from there (how do I scroll down more, is the question... I figured it out, awesome) is an O(log n) insert of my new node. In this case we would say: okay, we'll take 83, get rid of this guy, binary search the remaining list, and put 83 where it belongs. So now we've got 83 leads to 82 leads to 80, and we know our national best bid is 83 amongst all of our exchanges, which is great. Then we could do the same exact thing for the offers, but obviously we want to see the lowest offer, not the highest offer, and then we could just quickly compute the average. Okay, cool. Jordan is a very good communicator and is giving a textbook performance of how to communicate in system design interviews. He talks through his thought process with me very clearly, and he checks in at regular intervals to make sure I'm on board. He's also demonstrating strong technical skills: his knowledge of the exchange has allowed him to already get into complex considerations, such as aggregating the large amount of data the exchange puts out via a stateful consumer. These may have some fault tolerance issues down the line, so I'm curious to see how he approaches that. Sound okay? Yeah. Um, so once the base price has been calculated, how are you then going to get it to the user? Sure, okay. So yeah, that's a completely fair question, and definitely something that's a bit complex about Robinhood in particular. My general thinking, and I mentioned this earlier with my 
concerns, is that we want to limit the number of connections per client, the reason being that we could have every single one of our publishers... let's say a client has Apple, Google, Meta; the issue is these

Segment 4 (15:00 - 20:00)

are all different nodes, right? We're sharding by these, and so if a client has a hundred stocks, that would mean they would have a hundred different connections to publisher nodes, which is problematic for multiple reasons. It's problematic on the client end because they would have to directly communicate with those publisher nodes, and that would be really bad because you're maintaining a bunch of active server connections, whether that's long polling or websockets or server-sent events, and that would either drain the device battery or potentially just use a ton of network data. It's also bad for the publisher nodes, because they're doing a decent amount of low latency calculations; it would be really great if we could decouple all of that logic. So that's where I was hoping my user layer would come in. Okay, yeah, let me know if you want to ask anything there. Yeah, sorry, so how would you want them connected then? Sure. So I alluded to this a second ago: we could be talking about long polling... right, let me bring out the drawing. We've got our phone (can I draw here? yes I can... can I draw in there? maybe not). Phone. Great looking phone, nice job Jordan. So we've got our phone, and in addition to that we've got our user layer, and I'll see if I can draw in here; it would be awesome, but I don't know that I can. User layer. So the question is how we want these two connected. We really have three options, because the user layer is constantly going to be publishing values, so that makes us decide (where is my cursor right now?) between websockets, which is a real-time protocol, server-sent events, and long polling, where we basically open an 
HTTP request on the user layer and then maintain that for the whole session; that's an option as well. Personally, I lean towards websockets, the reason being that they are bi-directional, they reconnect automatically, and you don't have to send request headers every single time, which is nice. The reason I care about it being bi-directional is that our phone is going to be sending ticker requests to the user layer, and the user layer in turn is going to be responding with the prices of the stocks or the options. I think that's pretty important, in the sense that making that connection as fast as possible ensures we're getting as much data as possible and maintaining low latency on our phone, and also that we're keeping just one connection between a user device and the user layer of our service, which I think would be really useful to us. Then it's important, yet again, that we actually shard our user layer, because we have 100 million users and that's a lot of devices. In the ideal sharding scenario, we're horizontally scaling out across many different user layer servers, but each user layer server would serve users who mostly hold the same common stocks. For example, if user one and user two were connected to the same user layer server A, ideally the intersection of user one's stocks and user two's stocks would be large, because we want to be fetching the same stock prices on the same user layer servers. In practice, though, I think that's just hard to implement, so for that reason I'm just going to propose a pretty simple round robin load balancing. So I'm still thinking about the 
eventual design that I'm going to draw up here, and I

Segment 5 (20:00 - 25:00)

will draw that out in full, probably once I explain my thinking for the original piece. But yeah, ideally I'm going to have a load balancer between the phone and the user layer, and that is probably just going to do some sort of round robin load balancing to make sure that a certain user layer server isn't getting overloaded by too many clients. Perhaps we could do that round robin in a weighted way, where we consider how many different tickers a user layer server is already fetching, because it's possible that we have power users who request a ton of tickers; but for the most part I think this approach should generally work. Yeah, let me know if you have any questions about the sharding on the user layer. Yeah, interesting. I wanted to ask: what if one of these user servers went down? How can you make sure this piece of the system is fault tolerant? Sure, that's a fair question. So let's see, let's write that out, and I will drag down a little bit. Okay, we're talking about fault tolerance, which is definitely a tough piece here, because we want our system to be redundant in the face of any hardware or networking issues. I think it's hopefully pretty clear on a phone when its user layer server stops sending updates. What we could do is have our websocket connection occasionally send heartbeats, and if our phone stops receiving them, then it's going to want to establish a new connection. I think that's pretty simple for the most part; it can go through the load balancer. It is definitely true that if a user layer server with a lot of users goes down, we could in theory have a thundering herd if we were to load balance poorly, where all of our phones try to reconnect to one other user layer server and overload it, but that's kind of the point of using round robin load balancing: that shouldn't happen. So I think that just by making 
sure to use heartbeats (and if we really wanted to, we could even have something like a ZooKeeper layer which keeps track of the status of each user layer server, which would help us confirm that the user layer server is actually down and it's not an issue with the phone or something), we should get a pretty good sense that all the devices are getting the updates they need. Okay, Jordan, could you explain to me how the user layer actually connects to the publishing layer? Sure. So, like I mentioned, I wanted to talk about three pieces, and I've spoken about two of them so far (and I'll draw it all out at the end): we've got our user layer, we've got our exchange layer, and the third piece is this routing layer right here. So that is what I will now talk about. My thinking for the routing layer was: in theory, let's say we've got X user servers and Y publisher servers, or exchange servers, let's call them that. What we know is that every single user server probably wants at least 100 tickers, because we know it's connected to at least one user; it's probably got many users, so in reality we could be looking at thousands of tickers per user server in aggregate. That's going to be problematic, I think, especially for the publisher server, if nothing else; it may be problematic for the user server too, but that's what we have the sharding for. For the publishing server, the issue is this: we have a hundred million users, and let's say every single one of them owns Apple, or maybe one in ten. We could potentially have 10 million devices, or a huge number of user servers, asking for Apple, and if that happens, it could bog down our Apple 
publisher. I would prefer that not happen, and ideally we continue to make it so that all of the different layers of our service can scale independently
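The fan-in worry above can be put in rough numbers. Only the 100-million-user figure comes from the interview; the per-server connection capacity and replica count are assumed purely for illustration.

```python
# Back-of-envelope numbers motivating an intermediary routing layer.
users = 100_000_000
conns_per_user_server = 10_000                   # assumed websocket capacity
user_servers = users // conns_per_user_server    # 10,000 user servers

# Without an intermediary layer, a popular ticker's publisher could be
# subscribed to by nearly every user server directly:
worst_case_publisher_subs = user_servers          # up to 10,000 connections

# With a replicated routing/cache layer (say, 3 replicas per ticker shard),
# the publisher only ever writes to its replicas, regardless of user count:
publisher_connections = 3

print(user_servers, worst_case_publisher_subs, publisher_connections)
```

Under these assumptions the routing layer cuts the publisher's outbound connections from thousands down to a small constant, which is exactly the independent-scaling property argued for here.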

Segment 6 (25:00 - 30:00)

and so what I would like to do in this type of case is add kind of an intermediary routing, or communication, or whatever you want to call it, layer. I was thinking about how I would like to do this, and I think a lot of people might end up just going with a message broker here. Something like Kafka does make sense, specifically because it's a log based message broker, so all of the pieces of data that we publish would be persisted on disk, which might be nice for future logging or diagnosis or metrics or anything like that. It would also be nice because you only have to publish to one place, and many different consumers can get the message. But truthfully, the more I thought about it, I actually think a better solution might be to use something like a Redis cache. Again, we can scale these caches independently, and use a load balancer plus a ZooKeeper cluster to keep track of which cache is maintaining the data for which tickers. All this Redis cache is going to hold is actually the stock or option valuation, or rather the most recent one. So what it'll do is this: the Tesla publisher says, hey, the price for Tesla is now 450 dollars, after listening to the exchange. So exchange data comes in (sorry, I can't spell), goes to the publisher, which publishes 450, which goes to the routing/communication layer (which, again, is nice because we can scale, replicate, and do anything we need independently there). And so that's the routing/communication layer for Tesla, which is now going to update its value: maybe it was 445 before, and now it's updated to 450.
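The key property of this routing layer, as opposed to a log like Kafka, is that it keeps only the most recent valuation per ticker. Here is a minimal in-memory stand-in for that idea; a real deployment would presumably use an actual Redis client's SET/GET commands rather than this toy class, and the ticker and prices are just the transcript's example.

```python
class LatestPriceCache:
    """Stands in for one Redis instance in the routing layer: it keeps only
    the newest valuation per ticker, so a slow consumer never sees a backlog,
    just the latest value (unlike a log-based broker, which retains history)."""

    def __init__(self):
        self._prices = {}

    def publish(self, ticker, price):
        # The publisher overwrites whatever was there; old values are dropped.
        self._prices[ticker] = price

    def latest(self, ticker):
        return self._prices.get(ticker)

cache = LatestPriceCache()
cache.publish("TSLA", 445.0)   # the earlier value from the example
cache.publish("TSLA", 450.0)   # overwrites it, as described above
```

User servers reading `cache.latest("TSLA")` always see 450.0, never the stale 445.0, which is exactly the last-value semantics argued for here.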
And then, yet again, what you could do is use something like a server-sent event, or even a long poll, and that's going to go to our user servers, which then finally get it to our client device. Yeah, let me know if you think that's okay, and then hopefully we can almost get to the full-on drawing of the design. Okay, yeah. I did have another question I wanted to ask: what can we do if a publisher goes down? For sure, that's really fair, and that's probably the other big piece of fault tolerance: making sure that our intermediary routing/communication layer and our actual publishing layer are also fault tolerant. For the routing/communication layer, I think it's just a matter of using replication. It's no big deal if each publisher is publishing to, say, three places instead of one; it would be a big deal if our publisher were publishing to a thousand places instead of one. So using 3x replication or something like that is totally fine. As for the publishing layer, the interesting thing is that my solution is again going to be replication; however, our publishers are stateful, so it's very important that every backup for a given publisher is receiving the same messages that our primary publisher is. Otherwise, what we would see is one publisher goes down, and then the other one takes a while to warm up and get data from the exchanges before it can actually start publishing out stock and option valuations. So what that means to me is that we should use something called state machine replication here: rather than doing our typical form of replication, where one publisher would send all of its state to the next, what we could actually do is have our backup publishers also 
listening to the exchange, and by listening to the exchange, building up the same information locally that the primary publisher has. Then, for example, if our communication layer receives no updates or heartbeats from our publishing layer (or exchange layer), we go to ZooKeeper and say, oh shoot, do we think this thing is down? If it is down, then we switch to our backup. Does that seem okay?
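The failover just described can be sketched in a few lines. This is a toy model under stated assumptions: both replicas consume the same exchange feed (the state machine replication idea), and a monitor object stands in for the ZooKeeper-based liveness check; the class names, the two-second timeout, and the price are all invented for the example.

```python
import time

class Publisher:
    """Primary and backup both consume the same exchange feed, so the
    backup is already 'warm' when it takes over (no catch-up period)."""
    def __init__(self, name):
        self.name = name
        self.last_price = None

    def on_exchange_message(self, price):
        self.last_price = price

class FailoverMonitor:
    """Stands in for the ZooKeeper check: if the primary misses heartbeats
    past a timeout, reads are switched to the backup."""
    def __init__(self, primary, backup, timeout_s=2.0):
        self.primary, self.backup = primary, backup
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.active = primary

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.timeout_s:
            self.active = self.backup   # backup already holds current state
        return self.active

primary = Publisher("tesla-primary")
backup = Publisher("tesla-secondary")
monitor = FailoverMonitor(primary, backup, timeout_s=2.0)
for p in (primary, backup):              # the same feed goes to both replicas
    p.on_exchange_message(450.0)
monitor.check(now=monitor.last_heartbeat + 5.0)   # heartbeat has gone stale
# monitor.active is now the backup, and it already knows the latest price
```

The point of the sketch is the last comment: because the backup was fed the same messages all along, promotion is instantaneous, with no warm-up window where no valuations can be published.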

Segment 7 (30:00 - 35:00)

Yeah, okay, cool, so that makes sense. Jordan's almost 30 minutes into the interview and he hasn't yet started drawing. We normally recommend that you want to be drawing your diagram after 20 minutes or earlier; Jordan's taking another approach. He's first talking through the various considerations like a good teacher, and as the interviewer I encouraged this by asking lots of questions. That means he now has a great foundation on which to draw his diagram, but another interviewer might have wanted to see some design mapping earlier. Yeah, please go ahead. Yeah, I was hoping to... we've done all this background now, and what I'm hoping to do from here is draw it all out in one aggregated diagram to really tie it together, and then if you have any questions from there, feel free to ask me more. But yeah, hopefully this really ties things together now that we've done a lot of abstract talking. So I've got what I'll consider to be my client device over here, AKA a phone, very simple, and that phone is going to be showing things like: Apple is currently 81, Google is currently 200, and Meta is 21.
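The single-websocket exchange that ties the phone to the user layer can be sketched at the message level. The frame shapes here are assumptions; the transcript only says the phone sends a ticker request and gets prices back. A real server would carry these JSON frames over an actual websocket (for example via a websocket library), whereas this sketch only shows the handler logic.

```python
import json

def handle_client_frame(frame_json, price_lookup):
    """One user-layer server handling a message from a phone's websocket.

    `price_lookup` is whatever gives the latest price for a ticker,
    e.g. a read from the routing-layer cache."""
    frame = json.loads(frame_json)
    if frame.get("type") == "subscribe":
        ticker = frame["ticker"]
        return json.dumps({
            "type": "price",
            "ticker": ticker,
            "price": price_lookup(ticker),
        })
    return json.dumps({"type": "error", "reason": "unknown frame type"})

# Toy prices matching the phone display described above.
prices = {"AAPL": 81.0, "GOOG": 200.0, "META": 21.0}
reply = handle_client_frame('{"type": "subscribe", "ticker": "AAPL"}', prices.get)
# reply is a JSON string like {"type": "price", "ticker": "AAPL", "price": 81.0}
```

Because the websocket is bi-directional, the same connection carries both the subscribe frames up and the stream of price frames back down, which is the reason given below for preferring websockets over long polling or server-sent events.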
Okay, so we've got our phone right here, seems simple enough. How do we make sure that these prices are actually getting to it? Well, like I mentioned, the first aspect is that we want to establish some sort of websocket connection, so I'm not going to use a directed arrow here, because these are bi-directional. The first thing is that, well, it would have to request to get a websocket in the first place, and so what we would do is go through some load balancer, I think, and I mentioned I wanted to do this in a round robin way so that no part of our user layer is handling too much. So we've got our load balancer, and ideally what this is going to do is set up a connection with just one user server, where from then on we can perform more requests. Just give me a moment to think this through... here would be an example of user server one, but we're horizontally scaling this out, so we might have a user server X over here that we're not connected to in this particular case. So for this guy, this is going to be a websocket connection, where one of the things you might do is say "give me the price of Apple", and then in response, down the websocket, it would say Apple is, I don't know, eighty dollars. So that would be the example of the user layer, and ideally we could scale this out independently. Now, in addition to that, we've started building out our user layer; my next goal is to build out our communication layer. Again, I mentioned these were a bunch of different Redis instances; imagine that we have replicas and partitions right here, but I'm going to just write this out freehand: communication layer. And yeah, it doesn't have to be Redis, but the reason I like it in particular is that it keeps things relatively fast; all writes and reads are coming in and out of memory. It's 
effectively a cache, sharded by ticker or some combination of tickers, and it maintains connections for us. Cool. Then that is effectively going to be publishing out to our user servers; let me grab another arrow right here. The nice thing about this is managing all of our network connections, so, managing the overhead of networking. Perhaps this is a little bit over-engineered, but I'm going to go with it for now. And then the last thing that we've got, of course, is going to be our publishers, which are listening to the exchange and keeping data in memory. So this is going to be, like, Tesla primary, and then we've also got Tesla secondary, and then we've got our Apple primary, and we've got our

Segment 8 (35:00 - 40:00)

Apple secondary and these guys like I mentioned are all just listening to UDP feed which is coming from many different exchanges blah blah over here and qdp feed or rather I think I should mark this as exchange one because you know for example for options what I understand they're I think 16 or 17 different exchanges that you have to be aware of exchange too um change too and then also exchange three and then all of these guys are in turn going to be publishing to just about every single one of our kind of exchange layer right here and so it's important to recognize that you know we're able to kind of aggregate these results into not only just an nbbo but then one price that we can send to the rest of our application let's see over there hopefully you're starting to get the point at this point but you know everyone is receiving all of the data to do okay and then yeah from there in turn um we've got probably we need some sort of coordination service like a zookeeper which is going to you know keep track of all of our partitions it's going to be connected to everything has a sense of what tickers are located where and then of course we've got just our primaries at the moment publishing over here to our communication layer and then the last piece of this because even though I called it trivial I should still probably write it down is actually going to be um you know like in orders slash position service um so for this one in particular I think there probably would still be a load balancer in between them and ideally what we would want to do with this load balancer is use something like consistent hashing the reason being that you know if someone is constantly trying to request what stocks or positions they're holding it's probably not changed and so it'd be great if we could actually you know cache that on our orders and position service like for example if you know this guy was split into three different servers it would be great if for my particular user I was 
always going to this server right here, because then the result of my query would already be cached. But yeah, in the background I would imagine that for your orders and positions service there are basically two endpoints, one of which takes your order to the exchange to get a position and then, upon receiving a successful response, probably goes ahead and writes to some sort of database. Let me see if I can draw that out. I personally think MySQL is fine here; there just aren't that many interactions with this, people on a Robinhood-style app are not trading that frequently, so I don't think your database is going to be your bottleneck. I think it's more significant to figure out the challenge of providing accurate prices. I'd do something like shard by user ID and then have a positions table which, for every given user, has a row with user ID and stock ID and maybe some metadata like the price they bought it at, but truthfully that's kind of semantics from there. So yeah, this is the design I'm going to stick with and perhaps double down upon. Let me know if you have any questions about it. Okay, yeah, interesting. Well, I think it looks like a good design to me. We're coming to the end of the interview so we're going to have to wrap it up, but before we do, take a minute to look at it: is there anything you'd like to refine or add to the design? Yeah, for sure. Considering we don't have too much time left I'm not really sure how much I can refine at this point, but if there were one area I'd like to think about a little bit more it would probably be this communication layer right here. I'm pretty confident that Redis is a decently good technology to use here just because we really

Segment 9 (40:00 - 42:00)

only care about the most recent data point; we don't need a full queue. If for some reason a consumer of the data is delayed, it's not important for it to get old data. Though I do just wonder what the best way is to scale this thing out and ensure fault tolerance, to make sure that we're horizontally scaling out each layer independently in a way that minimizes the amount of resources we're using and keeps our costs down. So nothing specific to necessarily change about this part, but I would probably want to think it through a little bit more to ultimately make sure that I'm making this system as flexible as possible. But yeah, I think that's about it. Okay, well, let's end the interview there then. I was happy with Jordan's design and decided to wrap up the interview, but in another interview I might have spent a few more minutes with Jordan to dive a bit deeper into certain aspects. Perhaps I could have grilled him a bit more on the fault tolerance of the system, because even though his approach there seemed generally good, it might have been interesting to look at it in more detail. In any case, Jordan made a smart choice in breaking the system into three different layers that scale independently while also considering the load it would put on client devices. Overall a really strong design and a very impressive interview from Jordan. Great job. That's great, thanks for doing that. How was it, did you have fun? Of course, always a great time designing systems, so no complaints there. Great, well, thanks for coming on, and I hope we can have you back on here one day, because I really enjoyed it and I think everyone will get a lot out of it. Thanks a lot. Yeah, absolutely, and if you're watching this and happened to enjoy the interview and want to check out some more, feel free to go look at my channel. I've got plenty on there, and like I mentioned, I have no life, so there's going to be a lot more. See you guys.
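Jordan's point about the communication layer — consumers only ever need the most recent price, so a full queue is unnecessary — can be sketched as a last-value cache. This is a minimal illustrative stand-in for what Redis would do in his design; the `TickCache` class, sequence numbers, and ticker symbols are assumptions for the example, not details from the interview:

```python
import threading

class TickCache:
    """Last-value cache: each ticker maps only to its freshest price.

    Unlike a log/queue, a slow consumer never works through a stale
    backlog -- it simply reads whatever the latest published value is.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._latest = {}  # ticker -> (sequence_number, price)

    def publish(self, ticker, seq, price):
        # Drop out-of-order updates so a UDP packet arriving late
        # can never overwrite a newer price.
        with self._lock:
            current = self._latest.get(ticker)
            if current is None or seq > current[0]:
                self._latest[ticker] = (seq, price)

    def read(self, ticker):
        with self._lock:
            entry = self._latest.get(ticker)
            return entry[1] if entry else None

cache = TickCache()
cache.publish("AAPL", 1, 190.10)
cache.publish("AAPL", 3, 190.25)   # newest update wins
cache.publish("AAPL", 2, 190.15)   # late packet, ignored
print(cache.read("AAPL"))          # 190.25
```

This is also why fault tolerance is simpler here than with a queue: a restarted consumer only needs the next published value, not replayed history.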
All right, cheers Jordan. Hello, I really hope you found that useful. If you did, you can like and subscribe, and why not come visit us at IGotAnOffer.com? There you can find more videos, useful frameworks, and question guides, all completely free, and you can also book expert one-to-one feedback with our coaches from Google, Meta, Amazon, etc. Thank you and good luck with your interview.
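The consistent-hashing idea from Segment 8 — always routing the same user to the same orders/positions server so their cached results stay warm — can be sketched with a hash ring. This is a minimal sketch under assumed server names; a production load balancer would add health checks and rebalancing on membership changes:

```python
import bisect
import hashlib

def _hash(key):
    # Stable hash so a user maps to the same ring position in every process.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hash ring: each server owns arcs of the hash space.

    Adding or removing one server only remaps the users on its arcs,
    so most users keep hitting the server that already caches them.
    """

    def __init__(self, servers, vnodes=100):
        # Virtual nodes spread each server around the ring for balance.
        self._ring = sorted(
            (_hash(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def route(self, user_id):
        # First ring position clockwise from the user's hash (wrapping).
        idx = bisect.bisect(self._keys, _hash(user_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["orders-1", "orders-2", "orders-3"])
server = ring.route("user-42")
# The same user always routes to the same server while membership is stable,
# which is exactly what keeps their positions query cached.
assert ring.route("user-42") == server
```

The key property is that routing is a pure function of the user ID and the current server set, so no session state needs to live in the load balancer itself.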
