On this page, you can read more about AI and find out who you can contact to learn more.
There are many definitions of the term artificial intelligence, and its meaning has changed over time. Simply put, it generally refers to computers that are able to perform tasks previously thought to be reserved for humans.
The National Strategy for Artificial Intelligence defines it as follows:
"Artificial intelligence systems perform actions, physically or digitally, based on interpreting and processing structured or unstructured data, to achieve a given goal."
Artificial intelligence is a field of research that dates back to 1956, and the technology has developed at a varying pace since then. Historically, the media has paid a lot of attention to machines that beat humans at chess, machines that pass the "Turing test" (where the goal is that machine-generated communication cannot be distinguished from human communication), image recognition (which has led to advances in medical diagnostics, among other things), self-driving cars, and now generative text and image models. AI technology has a much broader range of uses than the media coverage suggests, and is already available in a wide variety of applications.
AI as a research field has made great leaps in recent decades. Two major technological advances in particular have made this possible. The first is progress in how to build models that can be trained on large amounts of data. The second is developments in chip technology that have increased the amount of available computing power, making it possible to perform this training on enormous amounts of data.
In simple terms, language models are about predicting the probability of the next word, or part of a word, in a sentence. This feature has been available on phones for many years, but it has not worked very well. There are many reasons why today's language models work far better. An important one is that the probability of the next word is based not only on the one, two or three preceding words (as the phone keyboard often does), but on a very large and ever-increasing number of preceding words. Since our language contains many filler words, not all words are equally important. One technology that has improved language models is precisely about identifying the topic of the request and giving it more weight than the other words; this is called transformer technology (the T in GPT).
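To make the next-word idea concrete, here is a deliberately tiny sketch in Python (our own illustrative example, not how a real language model such as GPT is built): it simply counts which word tends to follow which in a short text and turns those counts into probabilities. A transformer does something far more sophisticated, weighting all the preceding words by relevance, but the basic task of assigning a probability to the next word is the same.

```python
# Minimal illustration of next-word prediction: count which word follows which
# in a small piece of text, then turn the counts into probabilities.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the sofa"
words = text.split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(prev_word):
    counts = following[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'sofa': 0.25}
```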
Language models learn from large amounts of text, such as books and articles. However, this text does not usually follow a question-and-answer format. To become better at answering questions, the models therefore go through additional training focused on this style after they have first learned to understand and put words together.
This is done by having people simulate conversations, with one person asking questions and another answering. In addition, the models undergo "fine-tuning", where the responses they generate are given a "thumbs up" or a "thumbs down" (this, too, is currently done by humans), so that they are gradually adjusted to respond in the desired way. This is one of the things that prevents the models from making horrible statements, even though much of the material they are trained on may contain exactly that. It is also this post-processing that explains why it takes time from when the machine has "absorbed" its data until the model is released, and therefore why the models are not up to date on what has happened in the very recent past.
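As an illustration of the two kinds of human-made data described above, here is a hypothetical sketch in Python. The field names and examples are our own and do not come from any specific system; the point is simply that both the question-answer training and the "thumbs up/down" feedback boil down to data that people have written or labelled.

```python
# 1) Question-answer pairs written by people, used to teach the model the
#    question -> answer style.
instruction_examples = [
    {"question": "What is a language model?",
     "answer": "A model that predicts the next word in a text."},
]

# 2) Human feedback on generated answers ("thumbs up" / "thumbs down"),
#    used to nudge the model towards the kind of answers people prefer.
feedback_examples = [
    {"question": "What is a language model?",
     "generated_answer": "Language models are a type of bird.",
     "rating": "thumbs down"},
    {"question": "What is a language model?",
     "generated_answer": "A model that predicts the next word in a text.",
     "rating": "thumbs up"},
]
```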
Text-to-image generation relies on complex models, but simply explained, the models are trained to remove noise from images.
The starting point is images that already exist. A little "noise" is added to these images, which makes them blurrier. The machine's job is to remove the noise and thereby recreate the original image. When it manages that, more noise is added. In the end, the image given to the machine is completely unrecognisable, yet it still manages to recreate the original. Once the model has become that good, you can give it an image consisting purely of random noise (with no underlying original image) and use text to guide how the "noise" is removed, so that you end up with an image that did not exist before and that fits the user's description.
Because the noise you start from is random, the images are different every time, even when the instructions are the same.
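Here is a minimal sketch in Python/NumPy of the training setup described above, using our own placeholder names: take an existing image, add random noise to it, and measure how well a (here, deliberately untrained) model recreates the original. Real image generators use far more elaborate architectures and repeat this over millions of images, but the basic loop of "add noise, learn to remove it" is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a real training image (an 8x8 grid of pixel values).
original_image = rng.random((8, 8))

# Add a little random noise, producing the "blurry" version the model sees.
noise = rng.normal(0.0, 0.3, size=original_image.shape)
noisy_image = original_image + noise

def denoise(image):
    # Placeholder for a trained neural network. An untrained "model" just
    # returns the input unchanged, so its reconstruction will be poor.
    return image

reconstruction = denoise(noisy_image)
error = np.mean((reconstruction - original_image) ** 2)
print(f"Reconstruction error: {error:.4f}")  # training aims to push this towards zero
```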
Expressions such as "black box" are often used to describe how a generative AI model works. The expression suggests that we give the machine an input, something happens inside the black box that we do not understand, and then it gives us an output. The input (e.g. a question) and the output (e.g. an answer) are known, but not what happens in between when the output is generated.
There are people who make these models, and there are also people who train them, so why is it that we don't know how they work?
The answer lies in how the models are constructed. Much of generative AI is based on something called neural networks. A neural network can be described as a network of numerical nodes in many dimensions: "boxes" that each contain a number, with links between them. When the model is trained, it is these numerical values that are adjusted up and down until they eventually settle.
It is difficult to imagine nodes in so many dimensions, but the point is that the end result is exactly that: a huge collection of numerical values connected in many ways. For us humans, such a result is not very intuitive to interpret. We therefore do not really know what the machine has "realised", "learned" or "thinks"; we can only judge whether the output (the answer) seems reasonable. If the outputs often seem reasonable after a model has been trained, we conclude that we have a good model that has picked up useful patterns in its training data, even though we have not uncovered exactly which patterns the model has actually found.
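To illustrate why the raw model is so hard to interpret, here is a tiny Python/NumPy sketch of a network with two layers of "boxes with numbers". The weights below are random stand-ins for values that training would set; a real model has billions of them, and nothing in the numbers themselves tells us why a given answer comes out.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two layers of numerical values ("boxes") with links between them.
weights_layer1 = rng.normal(size=(4, 8))   # 4 inputs  -> 8 hidden nodes
weights_layer2 = rng.normal(size=(8, 2))   # 8 hidden  -> 2 outputs

def forward(inputs):
    hidden = np.maximum(0, inputs @ weights_layer1)  # simple non-linearity
    return hidden @ weights_layer2

output = forward(np.array([1.0, 0.5, -0.3, 2.0]))
print(output)  # the "answer" - the weights alone do not reveal why it came out this way
```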
To get started with a new AI solution, we should start with a problem or a potential for improvement. Then we must investigate whether AI is the right technology to solve the problem or realise the improvement, and if so, what type of AI we should use. Where there is potential for streamlining or optimisation, AI may be the right technology.
If you have data available, or the opportunity to collect large amounts of data (typically in the form of numbers, text, images or video), that is a good starting point. If you suspect that uncovering patterns in this data would provide valuable insight and/or help predict or generate future solutions, it may be worth investigating the use of AI more closely.
A language model can be a great help, but it is important to understand how to use it correctly. Here is some helpful advice:
- Find a chatbot that suits your purpose: Ask your company for guidelines on which chatbots you are allowed to use. For private use, there are many solutions available, in both free and paid versions. The most well-known is perhaps ChatGPT from OpenAI, which you can use for free here: https://chat.openai.com/
- Start a new conversation for each topic: The chatbot keeps the earlier parts of a conversation and uses them when generating new responses. Therefore, start a new chat whenever it is not relevant to carry the history further.
- Use enough words, and be clear and specific: Use enough words to be clear and specific about what you want help with. Ask for tone, format, style, length and structure. Feel free to give the chatbot a "role" to answer as, and be clear about what you DON'T want (see the example prompt after this list).
- Follow-up questions: The chatbot typically won't ask follow-up questions unless prompted. Asking it to pose the questions it needs answered in order to generate a useful response can therefore be very helpful.
- Specify how it should think: It can be a good idea to specify what steps the model should take to arrive at a good answer. Often, simply asking it to think "step by step" improves the result.
- Ask it to criticise its own answers: Requesting the chatbot to review and improve its own response can save you time and effort.
- Correct it: Say what you liked about the answer and what you didn't, and ask it to try again.
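To tie this advice together, here is an example (our own, purely illustrative) of a prompt that uses several of the tips at once: a role, a clear description of the task, requirements for tone, length and format, a statement of what is not wanted, and a request for follow-up questions:

"You are an experienced communications adviser. I need a short text (maximum 300 words) for our intranet about why employees should lock their screens when leaving their desks. Use a friendly, non-technical tone, structure it as three short paragraphs, and do not use bullet points or technical jargon. Before you write anything, ask me the questions you need answered to do a good job."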
Generative AI isn't quite like other software – it doesn't come with a specific instruction manual. The best way to become familiar with the use of the technology is therefore through exploration.
A good way to get started exploring is to download a chatbot that is available as an app (e.g. ChatGPT from OpenAI) and put it on the home screen of your phone, where you usually keep your search engine. In the weeks that follow, try chatting with the app instead of searching in traditional engines such as Google. It is a great way to get used to interacting with generative AI, and it lets you explore the possibilities and limitations of these chatbots. Practise giving specific and clear instructions to get the answers the way you want them.
Another piece of advice is to use image generation tools. There you can play around with generating images and quickly see whether your instructions give the model enough information to create the outcome you expect. How much do you have to write to get the image the way you want it?
Do you, your project, or your company want to enhance knowledge about artificial intelligence?
There is tremendous potential to increase productivity and efficiency through the adoption of AI. However, there are also significant limitations to the technology and pitfalls to avoid. Increasing knowledge about AI within your company can enable more people to leverage readily available tools, identify new opportunities, and simultaneously prevent misuse. A good starting point is to provide a basic understanding of generative AI, particularly focusing on the use of AI assistants, to a wide range of employees.
At Norconsult, we offer courses and lectures on AI. We focus on clear, simple communication that doesn't require prior knowledge, and our program can be tailored to suit your specific needs. Examples of topics for a basic introduction are provided below.
Generative AI
- What is AI?
- The history of AI
- Why is it difficult to know what an AI model "has learned"?
- How are language and image models created?
- What is the difference between artificial intelligence, artificial general intelligence and artificial superintelligence – and where do we stand today?
- How will this affect society in the future?
- How will this affect my job in the future?
- How do you recognise a good AI case?
Use of AI assistants
- What do early analyses say about how useful the models can be in the workplace?
- Why are the models often wrong and why should we not dismiss today's models even if we get generic or incorrect answers?
- Which models should I use in my work?
- What tasks can the model help me with?
- How should we use the models?
- Context, prompting, methods and ready-made instructions: advice on "interacting" with the models
Thomas Fløien Angeltveit
Leader of Digital Transformation
Eskil Elness
Head of Machine Learning and Systems Development