Late 2022 marked the launch of ChatGPT, and since then, the AI hype has become undeniable. (Generative) AI is now easily accessible to everyone. It feels like magic!
Texts are drafted in no time, and information seems readily available. Employees have been experimenting enthusiastically, exploring ways to make their work easier. While there is plenty of excitement, there is also significant (and justified) concern. Questions arise: where does the training data come from? What does OpenAI do with the input data? And what is the risk of bias? These concerns are especially relevant for companies, as confidential data can easily be leaked through free online tools.
Addressing the biggest risks first
Some organisations banned the use of such tools outright. Others established codes of conduct or provided guidelines on what is and isn’t permissible with generative AI, especially ChatGPT. In many cases, these measures were taken with the understanding that employees would likely experiment with the tools regardless. Organisations aimed to safeguard work quality and to prevent confidential information from unintentionally ending up with companies like OpenAI, where it is unclear what happens to the data. Most organisations have taken these precautionary steps by now.
Join the hype or risk falling behind?
Even after mitigating the major risks, uncertainty persists among organisations. Should they join the AI hype to stay competitive, or avoid generative AI entirely out of caution? The lack of user-friendly and responsible tools leaves many organisations stuck in their decision-making around AI. Yet now is the time for organisations to reflect on what they want to achieve with AI and why—independent of the available tools. Blindly following the hype without thoroughly evaluating this form of digitalization is a common pitfall.
Expectations around AI have since been tempered. Generative AI is not a universal solution (though it is often treated as one): other types of algorithms may perform better, or explainability may be crucial in certain processes. Moreover, generative AI’s success depends heavily on effective user prompts and the quality of the input (training data).
Clear strategy and policy are essential
An AI strategy is crucial for any organisation, providing clarity to employees about the organisation’s stance and guiding investments and developments. A strong AI strategy begins with a vision, centering on the key question: why does the organisation want to use AI in its work? What is the (expected) added value? Examples include increased efficiency, greater job satisfaction by automating routine tasks, or improving and innovating existing technologies. If such a vision cannot be clearly substantiated, AI might not yet align with the organisation’s level of digital maturity. Notably, an AI strategy can also be part of a broader digitalization or innovation strategy.
A well-defined strategy should also outline the organisation’s ambitions in AI. For instance, the organisation might aim to have all employees work in line with its AI principles. To achieve this and other ambitions, it is essential to identify which organisational values must be safeguarded when implementing AI. Consider values such as privacy, autonomy, independence, reliability, safety, or equality. Technology can be an excellent tool for realizing these ambitions, but it does not always have to involve generative AI. For example, if reliability is a high priority and no AI tools meet cybersecurity requirements, alternative solutions might be more appropriate.
Action plan
An AI strategy should align with (or be part of) the organisation’s digitalization or innovation policies. The next step is developing a concrete action plan that translates the strategy into objectives, outcomes, and activities. This might include drafting policies on AI usage or creating a proof of concept for using language models in the workplace.
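To make the proof-of-concept idea more concrete, here is a minimal sketch of what a first experiment with a language model might look like. It assumes the Hugging Face transformers library and deliberately runs a small open-source model locally, so that prompts containing internal information never leave the organisation’s own infrastructure; the model name, prompt, and parameters are illustrative placeholders, not a recommendation.

```python
# Minimal proof-of-concept sketch: run a small open-source language model locally,
# so prompts containing internal information stay on your own infrastructure.
# Assumes the Hugging Face `transformers` library (pip install transformers torch).
# "distilgpt2" is a small public placeholder model; swap in whatever model your
# organisation has vetted against its own privacy and reliability requirements.

from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summarise the key risks of using generative AI with confidential data:"
result = generator(prompt, max_new_tokens=100, do_sample=False)

print(result[0]["generated_text"])
```

A sketch like this is not a production tool; its purpose is to let an organisation test, on its own terms and with its own data, whether a language model actually adds the value the strategy assumes.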
Would you like to learn more about the opportunities and challenges of (generative) AI? Or discuss whether and how this new technology fits within your strategy, vision, or policies? Feel free to contact us or explore our services!