In the past couple of years, the world of AI has exploded dramatically, which makes it imperative for us to understand the risks of AI. The release of OpenAI and numerous other large language models (LLMs) makes artificial intelligence much more interesting and accessible for everyone. Although the term is not new, the concept was investigated in depth in the mid-50s and the 1990s.

Artificial Intelligence and Machine learning have been used since the mid-90s. Especially in research, I know some articles from the ’90s that speak about using neural networks in astronomical imaging. Nevertheless, it did not have the same success ratio as last year. Because the applications it was being used for were not as extensive as those discovered with LLM, it became a highly niched approach to specific problems.
What helped the AI boom
With the emergence of LLMs, the goal of AI has shifted from trying to solve complex problems to simply trying to understand human language. This approach allowed most “humans” to communicate and generate text with the AI. This had a broader application than using astrophotography or some other high-specialized problem.
This rise in the popularity of artificial intelligence generated a market boom, to the point that products that now have AI without generating a significant market difference but instead just become a “hot” product. The truth is that AI algorithms are pretty powerful when used with proper direction, but when vendors get a tagline for marketing, it doesn’t make a difference.
Simultaneously, the rapid market evolution has brought many companies to leverage AI in their daily operations to expedite their processes. However, when adopting these technologies, most companies can see the advantages of using some, such as incredibly generative AI, to streamline and improve their processes. Still, they do not stop to consider the risks.
Four Categories of AI Risk
These are some of the risks of AI usage, especially in corporate environments. A basic understanding of these risks will support better decisions and the implementation of additional safeguards.
- Bias and discrimination are significant risks for AI. Some of the most notorious cases were seen with the first image-generation AI engines. AI systems learn from the data they are trained on; if this data is biased, the AI can inadvertently reinforce and amplify those existing biases. For example, facial recognition technology or hiring algorithms may exhibit racial, gender, or socioeconomic biases if the data they were trained on reflects such systemic societal inequalities.
- Privacy Concerns: AI systems often rely on large datasets that include sensitive personal information. As a result, there is an inherent risk of misuse, whether through data breaches, invasive surveillance practices, or the unauthorized use and exploitation of private data without individuals’ consent. Moreover, it has been seen that some of the online Gen AI platforms allow some of their logs to be read from the internet, either willingly or unwillingly. This poses the question of whether we can trust these platforms when sharing information when working with PII or secret company information.
- Security Risks: This can be closer to what we have known as security concerns for most platforms. AI systems can be vulnerable to cyberattacks. Nevertheless, there is an added layer of security or security risk: the AI system itself. The Gen AI systems can be exploited through techniques like prompt injection to bypass hardcoded security measures and be able to retrieve data. Also, threat actors have been seen to use this AI maliciously, such as in understanding and crafting attacks for infrastructure autonomous vehicles or security systems.
- Lack of Accountability: When AI systems make mistakes or cause harm, it can often be unclear who is ultimately responsible—whether the developers who created the system, the users who deployed it, or even the AI itself. This pervasive lack of accountability complicates legal and ethical responsibility in significant ways.
The Future of AI in the Enterprise
We have seen increased demand for generative artificial intelligence in the past years. Those companies that jump to the races and adopt AI can take a competitive lead in the market. Nevertheless, doing so without considering the risks of AI can be a dangerous game.
I started to notice that some corporations have begun to demonstrate an interest in generating policies and acceptable terms regarding the use of AI. Also on the regulatory spectrum, in 2023, the European Parliament voted in favor of adopting the Artificial Intelligence Act, which, in its current form, bans or limits specific high-risk applications of AI. Regulations, as we have seen in the past, can prove to be challenging.
In conclusion, we must be cognizant of the opportunities AI can offer our companies while understanding the risks of adopting these technologies. And if we adopt them, we will probably want to keep an eye out to understand the new moves of the regulations for the next couple of years.