

22 May 2023

Legal and Practical Considerations When Integrating Artificial Intelligence Into Business Models[1]


Authors:  Ahmet DEMİRTAŞ & ChatGPT-4


  1. Introduction

Recent developments, especially in ChatGPT, have demonstrated the potential necessity and efficiency of using artificial intelligence in business models. The integration of ChatGPT or other Artificial Intelligence (‘‘AI’’) products into business models can potentially replace average-level employees. In other words, integrating AI into business models may have an effect similar to what the invention of the printing press had on scribes who copied books by hand. In the short term, this appears to work against the interests of employees, but in the long term it presents an inevitable opportunity for the business community, because the development and integration of AI into business models will not only enhance productivity but also reduce costs.

However, before integrating AI into business models, several legal and practical issues need to be meticulously considered. For instance, it is essential to fully understand the business problem and identify the areas where AI can be effectively applied to develop ideal solutions. Additionally, AI models require high-quality and accurate data to generate reliable outcomes. Therefore, continuous training of AI using up-to-date and accurate data holds great importance.

Moreover, companies that aim to incorporate AI into their business operations must ensure compliance with the applicable legal framework. AI, by its very nature, can introduce risky decision-making mechanisms and potentially cause harm to individuals[2]. For example, where decisions rely solely on algorithmic processes, loan applications may be rejected, flight tickets may be offered at higher prices, or job applications may be dismissed because of a single word in a résumé[3]. Besides, the use of AI in healthcare, such as AI-powered cancer diagnosis devices, can have critical consequences if misdiagnoses occur. Thus, it is crucial for companies in these sectors to understand their potential liabilities and take adequate measures to mitigate or eliminate these risks. Apart from liability law, other legal fields, such as intellectual property and personal data protection, may also be relevant, but we exclude them from the scope of this article as they have been addressed in our previous writings[4].

  2. Technical Steps to Integrate Artificial Intelligence into Business Models

The first step for companies that plan to use AI is to determine the specific needs for which AI will be utilized. For example, an e-commerce company can employ AI technology to enhance customer experience on their website and increase sales. AI technology can analyze customer purchasing behavior and provide product recommendations. Considering the effectiveness of targeted advertisements based on customer behavior in presenting products of interest, the importance of using AI for this purpose in e-commerce companies becomes evident[5]. Additionally, AI can offer customer support services based on AI algorithms to address issues in the purchasing process and reduce return rates.
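To make the recommendation idea above concrete, here is a toy sketch (with entirely hypothetical order data and item names) of a simple co-occurrence recommender: products that frequently appear in the same orders as a customer's current basket are suggested first.

```python
# Toy co-occurrence recommender (hypothetical data): suggest items that
# often appear in the same past orders as the customer's current basket.
from collections import Counter

past_orders = [
    {"laptop", "mouse"},
    {"laptop", "mouse", "usb_hub"},
    {"phone", "case"},
    {"laptop", "usb_hub"},
]

def recommend(basket, orders, top_n=2):
    """Rank items by how often they co-occur with the customer's basket."""
    scores = Counter()
    for order in orders:
        if basket & order:                  # this order shares an item
            for item in order - basket:     # count the other items in it
                scores[item] += 1
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"laptop"}, past_orders))
```

Real systems use far richer signals (browsing history, ratings, learned embeddings), but the underlying principle of recommending from co-occurrence patterns is the same.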

Another example is a manufacturing company that can leverage AI technologies to improve production efficiency, minimize errors, and reduce downtime. AI can utilize data collected from sensors on the production line to detect and prevent errors during the production process. Furthermore, AI technology can optimize the production process to enhance efficiency and predict machine failures.

Additionally, almost every company can utilize AI technology for CV screening during the hiring process[6]. In the healthcare sector, AI can be employed for disease diagnosis[7]. In the legal field, lawyers can benefit from AI in contract drafting and review, judges in decision-making, and prosecutors in preparing indictments[8]. The AI system called COMPAS in the United States, for instance, has demonstrated its potential effectiveness in assisting judges in assessing the likelihood of reoffending when making parole decisions[9]. AI will also become an indispensable part of the cryptocurrency sector, where investors can enhance their business models and investments through algorithms that predict price movements. There are numerous other areas where integrating AI into business models can improve efficiency.

AI technology is a learning system, and continuous training is crucial when implementing it into business models to enhance accuracy and performance. AI models are trained based on pre-determined features and data, so when they encounter new data that they haven't been trained on, their accuracy may decrease.

Continuous training allows AI models to be updated, learn from new data, and improve their accuracy. For example, a customer support chatbot needs to be trained to respond to new questions from customers. Through continuous training, the chatbot can answer more questions and increase its accuracy rate. Another example is an e-commerce company that offers product recommendations to customers through AI. The AI model needs to be trained and updated based on customer preferences to provide better service. Therefore, determining the specific needs in the business model, selecting the appropriate AI technology, and ensuring continuous training to enhance AI performance are important factors for companies considering the use of AI.
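As an illustration of why continuous training matters, the following minimal sketch (with hypothetical, deliberately simple data) trains an online linear classifier on historical data, shows its accuracy degrading once the data distribution drifts, and then restores accuracy with an incremental update:

```python
# A minimal sketch of "continuous training" with hypothetical data: an
# online linear classifier is fitted on historical data, loses accuracy
# when the data drifts, and recovers after an incremental update.

def predict(w, b, x):
    """Linear classifier: 1 if w·x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def partial_fit(w, b, data, lr=1, epochs=20):
    """Perceptron-style incremental update on a batch of (x, label) pairs."""
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, b, x)      # -1, 0, or 1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def accuracy(w, b, data):
    return sum(predict(w, b, x) == y for x, y in data) / len(data)

# Historical data: label 1 when the first feature dominates the second.
old_data = [([2, 0], 1), ([3, 1], 1), ([0, 2], 0), ([1, 3], 0)]
# Drifted data: label 1 now means both features are large.
new_data = [([3, 3], 1), ([2, 3], 1), ([1, 0], 0), ([0, 1], 0)]

w, b = [0, 0], 0
w, b = partial_fit(w, b, old_data)
print(accuracy(w, b, new_data))      # degraded on the drifted data
w, b = partial_fit(w, b, new_data)   # continuous-training step
print(accuracy(w, b, new_data))      # restored
```

Production systems would of course use a proper ML framework and validation pipeline, but the pattern is the same: monitor accuracy on fresh data and retrain when it drops.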

  3. Legal Framework and Compliance Process for Integrating Artificial Intelligence into Business Models

AI, along with its enormous benefits, also brings various legal risks that can lead to undesirable harm. Incorrect decisions made by AI can result in injuries, fatalities, exposure to discrimination, or invasion of privacy[10]. It is precisely because of these risks that the European Union has categorized AI technologies by risk level and directly banned certain AI practices in the Proposal for the Artificial Intelligence Act ("the Act Proposal"). For example, according to Article 5 of the Act Proposal, AI systems that manipulate individuals through subliminal techniques, exploit individuals' physical or mental vulnerabilities, engage in social scoring, or perform real-time remote biometric identification are prohibited[11].

In China, a scoring system is implemented based on individuals' social behaviors to determine their social status. This scoring system takes into account various factors such as individuals' behavior, financial history, reputation, employment, and education status, and assigns them a score, using artificial intelligence technology to monitor individuals' behaviors. For instance, the score of a person who violates traffic rules may decrease, while those who assist others or contribute to the betterment of society may see their score increase. The Act Proposal has banned such applications. Similarly, the Act Proposal prohibits companies from specifically targeting alcoholics with alcohol advertisements, as this falls within the ambit of exploiting an individual's vulnerability. Beyond such prohibited practices, the use of AI in areas affecting health, safety, and fundamental rights, including certain forms of targeted advertising, has been classified as high-risk and is subject to various regulatory obligations[12].

In legal doctrine, authors have stated that tort liability may arise for damages caused by AI. Particularly in high-risk areas, it has been argued that there should be a liability regime similar to strict liability or liability for dangerous activities in the event of injury or death[13]. In other cases, it has been noted that traditional fault-based tort liability may apply. In any case, regardless of the debate about the basis of liability, it is unquestionable that those who create and use AI face potential liability for the damages it causes. For example, if a self-driving vehicle using AI technology were to hit and kill a pedestrian due to a software error, the responsibility would lie with the company that developed the software. Similarly, a company using an AI program for CV screening that discriminates against women would inevitably be held responsible. For this reason, Amazon faced various lawsuits and chose to discontinue the AI technology that favored male candidates in its hiring processes[14]. Another example would be an AI diagnostic system that, due to incorrect data input by a doctor, makes a wrong diagnosis and harms the patient; in that case, the doctor's liability would come into question. In our view, when determining liability, it should be investigated whether the wrong decision was due to a user error, a software error, or another error in production, and liability should be assigned according to the actual cause of the damage.

In addition, this liability may not be based solely on tort law. For example, if a law firm uses AI technology for contract review and its AI system provides incorrect legal advice because the information was not kept up to date or the AI was inadequately trained, there could be a breach of the attorney-client contract, leading to contractual liability. Therefore, liability arising from AI should not be confined solely to the domain of tort law.

  4. Measures to Reduce Liability Arising Out of Artificial Intelligence

At this point, human intervention can serve as a method to reduce liability arising from AI systems. AI systems can produce unpredictable outcomes, and human intervention is necessary to monitor the system's results and respond appropriately. Additionally, AI systems may sometimes make biased decisions, which can be corrected through human intervention. Particularly in critical areas where AI systems are used (such as healthcare, security, and transportation), incorrect decisions can lead to significant harm, so the decisions of AI systems in these areas should be closely monitored. There is no doubt that a risk assessment should be conducted before implementing any monitoring measures; where the risks are higher, the frequency of monitoring can be increased.
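The human-intervention approach described above can be sketched as a simple routing rule (the domain names and confidence threshold below are hypothetical): decisions in high-risk domains, or decisions the model is not confident about, are escalated to a human reviewer instead of being executed automatically.

```python
# A minimal sketch (hypothetical names and threshold) of routing AI
# decisions to human review: high-risk or low-confidence outputs are
# escalated rather than acted on automatically.

HIGH_RISK_DOMAINS = {"healthcare", "hiring", "credit"}

def route_decision(domain, decision, confidence, threshold=0.9):
    """Return either the automated decision or a human-review escalation.

    High-risk domains always go to a human; elsewhere, only decisions
    whose model confidence falls below the threshold are escalated.
    """
    if domain in HIGH_RISK_DOMAINS or confidence < threshold:
        return {"action": "human_review", "proposed": decision,
                "confidence": confidence}
    return {"action": "auto", "decision": decision, "confidence": confidence}

# A low-risk, high-confidence case is automated...
print(route_decision("product_recommendation", "recommend_item_42", 0.97))
# ...while a credit decision is always escalated, regardless of confidence.
print(route_decision("credit", "reject_loan", 0.99))
```

The risk assessment mentioned above would determine which domains belong in the high-risk set and how strict the confidence threshold should be.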

Furthermore, the decisions made by AI systems depend on the quality and accuracy of the data used. Therefore, improving data quality can help AI systems make accurate decisions and prevent liability from arising.

  5. Conclusion

Integrating artificial intelligence into business models currently appears to be a commercial decision left to companies' discretion. However, companies that have already made AI an integral part of their business models will benefit from early AI training and will be better equipped to address potential legal compliance issues. This demonstrates the importance of early integration of AI into business models, as it can give these companies a competitive advantage over their rivals.

Also, companies that aim to compete with their rivals will inevitably have to leverage technology and incorporate AI into their systems. However, because AI law is still developing, some legal uncertainties remain. Therefore, it is crucial to closely follow legislative processes and implement various measures, including human intervention, to mitigate potential legal issues caused by AI. Companies should evaluate, together with their compliance teams, how the AI technologies they plan to use stand in the face of current developments, and should develop in-house policies that reduce the legal risks of using AI. We believe that the use of AI in many sectors is inevitable. Companies that can produce high-quality products at lower cost are the ones that survive in the economic landscape, while those that fail to keep up with technology become obsolete. Therefore, taking immediate action and making AI programs an integral part of a company can propel it to the forefront in the near future.

If you are developing any AI product and you think you may have overlooked legal issues, you can always contact us.


[1]    This text was written together with GPT-4

[2]    Maja Brkan, ‘‘Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond’’, 2018, p. 3

[3]    Erdem Büyüksağiş, ‘‘Yapay Zekâ Karşısında Kişisel Verilerin Korunması ve Revizyon İhtiyacı’’, YÜHFD, C. XVIII, 2021/2, p. 535

[4]    Ahmet Demirtaş, ‘‘OpenAI-ChatGPT: How It Shakes Intellectual Property and Data Protection Law’’, 2023 < > accessed: 18.05.2023

[5]    Claire M. Segijn, Iris Van Ooijen, ‘‘Differences in Consumer Knowledge and Perceptions of Personalized Advertising: Comparing Online Behavioural Advertising and Synced Advertising’’, Journal of Marketing Communications, 2022, p. 211

[6]    Jihad Fraij, Laszlö Varallyai, ‘‘A Literature Review: Artificial Intelligence Impact on the Recruitment Process’’, 2021, p. 116

[7]    Bill Gates, ‘‘The Age of AI Has Begun’’, 2023, p. 4

[8]    Fuso Jovia Boahemaa, ‘‘The Impact of Artificial Intelligence on Justice Systems’’, 2019, p. 10

[9]    Rasoul Amirzadeh, Asef Nazari, Dhananjay Thiruvady, ‘‘Applying Artificial Intelligence in Crypto Markets: A Survey’’, 2022, p. 22

[10]   European Commission, ‘‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’’, 2020, p. 1

[11]   Gabriele Mazzini, Salvatore Scalzo, ‘‘The Proposal for the Artificial Intelligence Act: Considerations Around Some Key Concepts’’, 2022, p. 24

[12]   Ibid, p. 25

[13]   Baris Soyer, Andrew Tettenborn, ‘‘Artificial Intelligence and Civil Liability: Do We Need a New Regime?’’, 2023, p. 10

[14]   Frederik Zuiderveen Borgesius, ‘‘Discrimination, Artificial Intelligence and Algorithmic Decision-Making’’, 2018, p. 25