
In a statement that raised both eyebrows and questions throughout the tech world, Microsoft’s CEO, Satya Nadella, recently revealed that up to 30% of the company’s code is now written by AI.
Yes, you heard that correctly: almost a third of the code that drives one of the largest tech giants on the planet is not written by humans, but generated by machines. That is almost as impressive as it is terrifying, so is it a sign of progress or a red flag with a bright LED?
AI-assisted coding tools such as GitHub Copilot (which Microsoft owns), Amazon CodeWhisperer and Google Codey are already reshaping how developers write, test and ship software. These tools are trained on massive sets of publicly available code and documentation, and can autocomplete lines, generate snippets or even spit out complete functions from a few prompts.
In Microsoft’s case, Copilot is now helping thousands of developers across the company, sometimes writing lines of code before they have even had their second coffee. In a way, the world of writing code is being democratized, breaking down the barriers to entry.
But is that a good thing? While 30% is a hefty chunk, the more important question is: how good is that 30%?
When AI gets it right
You cannot argue with the fact that when AI is good, it is good. It absolutely shines when it comes to boilerplate code, the repetitive, mind-numbing stuff most developers dread. Think writing test scaffolding, generating configuration files or producing common algorithms. It’s like having a super-speed intern that never sleeps and never needs a coffee break.
There is also a serious productivity gain. According to Microsoft, developers using Copilot complete tasks up to 55% faster, freeing them up to focus on more complex logic, architecture decisions or actual creative thinking, so to speak.
And for new programmers, AI can also act as a learning tool, offering suggestions that help them understand structure, syntax and style without constantly checking Stack Overflow.
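To make “boilerplate” concrete, here is a minimal sketch of the kind of test scaffolding an assistant in this class typically fills in from a one-line prompt or a function signature. The `slugify` helper and its test cases are hypothetical, invented for illustration, not taken from any real codebase.

```python
# Hypothetical example: the repetitive test scaffolding an AI assistant
# can generate from a short prompt or an existing function signature.
import re
import unittest


def slugify(text: str) -> str:
    """Turn a title into a URL-friendly slug (illustrative helper)."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")


class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace_and_symbols(self):
        self.assertEqual(slugify("  AI &  Code  "), "ai-code")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")


if __name__ == "__main__":
    unittest.main()
```

Nothing here is hard, which is exactly the point: it is the low-stakes, pattern-heavy work where an autocomplete-style tool saves real time.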
But what happens when AI is wrong?
Unfortunately, AI-generated code is not always a win. In the best case, it can be clumsy. In the worst, it can be wrong, insecure or completely unsuited to the task. Because AI models are trained on vast amounts of public code, including flawed, outdated or vulnerable examples, their output can inherit those same flaws, bugs and whatever bias exists within the training data.
Security experts have warned that blindly trusted AI suggestions could introduce vulnerabilities into software. Remember, the AI does not understand the code it writes; it is simply predicting patterns based on probability. That is fine when it is autocompleting a loop, but not so good when it is writing authentication logic or data-handling functions.
Then there is the issue of responsibility. If an error causes a catastrophic failure and it was written by an AI, who takes the blame? The developer who accepted the suggestion? The company that integrated the tool? The AI model itself, twiddling its virtual thumbs in the cloud? It may sound like a minor point, but it is a real issue that has to be worked out.
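As a hedged, hypothetical illustration of why that distinction matters: the first function below is the kind of plausible-looking suggestion a pattern-predicting assistant might produce, the second is the parameterised form a reviewer should insist on. The table and column names are invented for the example.

```python
# Hypothetical illustration: a plausible-looking suggestion vs. the safer form.
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # A pattern an assistant might predict from its training data:
    # string interpolation into SQL. It runs, but it is open to SQL injection.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # The parameterised version: the driver handles escaping,
    # so user input cannot rewrite the query.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users (username, email) VALUES ('alice', 'alice@example.com')")
    print(find_user_safe(conn, "alice"))
```

Both versions return the same row for normal input, which is precisely why the unsafe one slips past a quick glance.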
Creativity versus completion
There is a subtle difference between writing code and simply completing code: it is the difference between making something and copying something.
Much of what Copilot and similar tools do is the latter: they autocomplete based on context. That is useful in plenty of situations, of course, but it is a far cry from architecting a system from scratch. AI still lacks the nuanced reasoning, domain knowledge and problem-solving instincts that experienced developers bring to the table.
In other words: AI can be brilliant at finishing your sentences, but do not expect it to write your novel, at least not yet.
So, should AI write 30% of the code?
The fact that it can is one thing, but should it?
Here is the honest answer: it depends on the 30%. If AI is writing repetitive, safe, low-stakes code, that is progress. Developers can get more done, learn from the suggestions and spend more time doing what humans do best: thinking critically, collaborating and solving big problems. It is much the same way we treat the use of AI in other areas, like admin and writing.
But if we are talking about complex, safety-sensitive or mission-critical systems, you may want to pump the brakes. Tools such as Copilot are best viewed as collaborators, not replacements. They can help, inspire and surprise, but ultimately they still need adult supervision.
Microsoft’s 30% claim is a bold benchmark. It tells us that AI is not just dabbling in development; it has its own seat at the table. But whether it gets to stay for dessert depends entirely on how well we use it and what we ask of it.