The path from AI to AGI and ASI is more than technological; it is profoundly human. The real issue is not if machines surpass us, but whether we prove wise enough to steer them responsibly.
Artificial Intelligence (AI) has generated considerable interest not only among researchers but also among the general public. Tools like ChatGPT and DeepSeek are already in common use, to some extent replacing Google as the first place people turn with their questions. Yet there is nothing ‘artificial’ about ‘artificial intelligence’: it is entirely intentional, built by people. Humankind has always sought tools and assistance to make life easier, and AI can be seen, in a way, as simply the latest such tool.
Artificial General Intelligence (AGI) is the expected next stage of AI development, one that would result in human-like intelligence. It would be software that can understand what is going on around it and respond appropriately, much as a human being does. While today’s AI, sometimes called artificial narrow intelligence (ANI), can perform only specific tasks, AGI would be able to analyse a situation and act as a human would. It is therefore also known as human-level intelligence.
Artificial Super Intelligence (ASI) is the stage beyond that, expected to take AI past human-level intelligence. Just imagine: human beings creating something more intelligent than themselves! It sounds impossible, but, as they say, nothing is impossible in this world; if we look around, many things that once seemed impossible eventually became possible. What remains debatable is whether the creators of ASI would be able to keep it under control.
Benefits of AI
AI is already an integral part of daily life, simplifying tasks and enhancing services across many fields. It enables quick preliminary research, automates routine work, and supports early detection of diseases such as cancer. Platforms like YouTube and Netflix use it to recommend content, while its ability to process and analyse massive datasets far exceeds human capacity. AI tools also improve accessibility, assisting people with disabilities through speech-to-text, image recognition, and smart assistants. These examples are just a fraction of the benefits already in use.
If AI has benefits, it also brings several problems. If the training data is biased, the AI will be biased too. Because many systems lack transparency, it is hard to understand how they arrive at a conclusion. Personal data collected for AI can be misused, and AI itself can be turned to phishing scams or cyberattacks. Automation may make many jobs redundant, and overreliance on AI systems can prove dangerous in life-threatening situations, in healthcare and law enforcement, for instance.
We should also consider whether AI is making us lazy and dull. The same question arose with the invention of the calculator, or even the bicycle! Eventually, we discovered that these tools made us more efficient and productive, as we could accomplish much more in significantly less time. These, and many other devices, have become an integral part of our lives, and now we can’t do without them. Nobody wants to walk from Mumbai to Delhi, for instance, when the same distance can be covered far more conveniently, and far faster, by train or plane.
In conclusion, AI, like most other tools, can be used in both positive and negative ways. It is therefore essential to establish guidelines and regulations for its responsible use. Recognising this, governments around the world are working on exactly that. Their main concerns at present are data privacy, safety, transparency, accountability, bias mitigation, and oversight.
In 2024, the European Union introduced the first comprehensive AI law, the EU AI Act, which categorises AI systems by risk: unacceptable, high, and limited/minimal. It focuses on transparency, human oversight, accuracy, and cybersecurity.
The USA so far has no single federal AI law. It does, however, have the AI Bill of Rights (2022), a non-binding framework outlining rights related to AI, including protection from bias and the right to an explanation. There are also state- and agency-level rules, such as FTC and NIST standards. In addition, an executive order on AI (October 2023) mandates safety testing, watermarking, and responsible development.
In India, there is currently no AI-specific legislation. Government policy is guided by the National Strategy for AI (NITI Aayog, 2018), which promotes responsible AI, and by the Digital Personal Data Protection Act (2023), which governs how AI systems may handle personal data. Discussions are ongoing on the ethical use of AI and on regulation in areas such as public safety and education.
There are, however, some regulatory principles common to most nations: transparency, accountability, privacy and data protection, safety and robustness, and fairness and non-discrimination. Users must know when they are interacting with AI. Developers and operators are responsible for the outcomes. Systems must comply with applicable data-protection laws, must be safe for use in high-risk industries such as healthcare and aviation, and must not discriminate in hiring, lending, healthcare, or any other area.