# Run faster code reviews with deep research for GitHub

## Metadata

- **Channel:** OpenAI
- **YouTube:** https://www.youtube.com/watch?v=ZWwquOvw5Bk
- **Date:** May 12, 2025
- **Duration:** 3:06
- **Views:** 24,579
- **Source:** https://ekstraktznaniy.ru/video/11305

## Description

ChatGPT's deep research integration with GitHub can run code reviews. This demo walks through how ChatGPT analyzes pull requests, identifies potential improvements, spots bugs, and suggests fixes in minutes.

Built for software engineers, development teams, and product leads who want more efficient, higher-quality code reviews.
🔗 Get started: OpenAI for Business [http://bit.ly/43exn7L]
❓ FAQs: Help center [https://help.openai.com/en/articles/11145903-connecting-github-to-chatgpt-deep-research]

## Transcript

### Segment 1 (00:00 - 03:00)

As soon as deep research launched, we started hearing from customers that they wanted to be able to power this multi-step research in their own systems, not just on the open web. That's when we decided to integrate with GitHub. Since I've already connected my account, I could select a single repository, but because the GitHub connection respects user permissions and I'm in a controlled environment, I'll just select all repositories. You may need additional IT assistance to interact with some repos if they don't appear here or during your initial setup.

I'll approach this through the lens of a developer adding new functionality, specifically the Responses API, to the OpenAI Python library. First, we'll take a look at the example diff in a commit for reference. You can see it's quite a large change: lots of information being added to files that are already there, and some additional files being added altogether. I'll go back and drop in my prompt, asking deep research to make sure that the addition of the Responses API will go smoothly. I'm asked some follow-up questions about the format of the report; I'll answer those and then kick things off.

You can see deep research starting its analysis, gathering available resources and collecting information as it goes. If we open the right-hand sidebar, you'll start to see the chain of thought gather here. Just as deep research has done so well in the past, we see this multi-step research, analysis, and citation collection all taking place inside our own GitHub repo. It grabs some documentation, reads through files, and checks commits, and now we have our completed report. Because it pulled in all of the context from our knowledge bases and reasoned over the best approach to the solution, this could take anywhere from 10 to 30 minutes, sometimes even more.
But in this case, it took about 16 minutes and used 49 sources across 49 searches, all within our GitHub repo. Throughout the report, we can see citations that let us verify the findings of deep research with GitHub. Scrolling down, it looks like first we have our risk assessment; later on, we have some changes to streaming behavior. All great information we can use to make sure this is exactly what's going to change with the Responses API. Being able to perform focused, specific deep research in tools like GitHub will save your teams hours of internal research. We're really excited to see how this impacts your engineering teams.
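The change under review in the demo adds the Responses API to the OpenAI Python library. As a rough sketch of the workflow, here is what assembling a review prompt and calling that API might look like; the prompt wording, helper function, repo name, and model choice are illustrative assumptions, not shown in the video:

```python
import os


def build_review_prompt(repo: str, feature: str) -> str:
    """Assemble a deep-research-style review prompt.

    This is a hypothetical helper; the video does not show the
    exact prompt text used in the demo.
    """
    return (
        f"Review the pending changes in {repo} that add {feature}. "
        "Flag potential bugs, breaking changes, and risks, and cite "
        "the files and commits each finding is based on."
    )


prompt = build_review_prompt("openai/openai-python", "the Responses API")

# The actual API call needs a key, so it is guarded here. The call shape
# follows the openai-python Responses API (client.responses.create).
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.create(model="gpt-4o", input=prompt)
    print(response.output_text)
```

In ChatGPT itself none of this code is needed; the prompt goes straight into the deep research composer with the GitHub connector selected, and the model handles retrieval and citation on its own.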
