
Generative AI Trends in 2024: Key Investment Areas for Companies

Nina Habicht • Jan 01, 2024

This article provides companies with an overview of the major Generative AI trends and areas to watch in 2024 and the coming years.


Top 1: Autonomous Agents


Tencent recently released a framework called AppAgent that can operate mobile apps. We will see a trend towards agents taking over tasks for humans in AI products and applications.


Autonomous agents are systems or software programs capable of independent action on behalf of their users or creators. Their functionality is rooted in AI and machine learning, enabling them to make decisions and perform tasks without human intervention.


For example, instead of the user prompting a generative AI system step by step to create an image, then a logo, and so on until a website is finished, the AI system itself generates subtasks that create a logo, check whether a domain name is available, and help build the website.
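To make the idea concrete, here is a minimal, hypothetical sketch of such an agent loop in Python. The `call_llm` helper and the example goal are assumptions for illustration only, not a real agent framework such as AppAgent.

```python
# Minimal sketch of an autonomous agent loop (illustrative only).
# call_llm() is a placeholder for any chat-completion API, not a real library call.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    raise NotImplementedError("Plug in your preferred LLM client here.")

def plan_subtasks(goal: str) -> list[str]:
    # Ask the model to break the high-level goal into concrete subtasks.
    answer = call_llm(f"Break the goal '{goal}' into short, ordered subtasks, one per line.")
    return [line.strip() for line in answer.splitlines() if line.strip()]

def run_agent(goal: str) -> None:
    for task in plan_subtasks(goal):
        # In a real agent, each subtask would be routed to a tool
        # (image generator, domain-availability check, website builder, ...).
        result = call_llm(f"Execute this subtask and report the result: {task}")
        print(task, "->", result)

# Example (requires a working call_llm implementation):
# run_agent("Launch a website for my bakery")
```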


Top 2: Adoption and Governance of Generative AI Tools


We will see more widespread adoption of Generative AI in 2024: "We know that nearly two-thirds of employees are already playing around and experimenting with AI in their job or in their personal life," according to TechTarget. "We really expect that this is going to be more normal in our everyday business practices in 2024."


However, governance issues such as data policy, cybersecurity, and model compliance remain unresolved and are top priorities for 2024 in the field of Generative AI. Although companies such as JPMorgan Chase and Samsung have temporarily, or at least partially, banned the use of ChatGPT, new approaches for applying these tools will be needed.


One potential solution is the "Bring Your Own AI" approach, where data remains accessible only locally or in sandbox environments. This would enable companies dealing with sensitive data to experiment safely.
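As a rough, hypothetical sketch of what such a sandbox policy could look like in code, the snippet below redacts obvious personal data before a prompt is allowed to leave the company environment. The regex patterns and placeholder labels are assumptions for illustration, not a complete data-protection solution.

```python
import re

# Rough sketch of a guardrail that redacts obvious personal data before a prompt
# may leave the sandbox. The patterns below are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

# Only the redacted prompt would ever be forwarded to an external provider.
print(redact("Please summarize the complaint from anna@example.com regarding invoice 4711."))
```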


Contact us if you want to gain insights from your company data without sending sensitive data to OpenAI.

Top 3: Open-source Generative AI models on the rise


Since closed environments like OpenAI's are expensive, and since SMEs often lack the resources to build their own models or products, we will see increasing use of free, open-source models. Models such as Vicuna, Mixtral (which matches or outperforms Llama 2 70B as well as the GPT-3.5 base model), and OpenFlamingo (an open-source version of DeepMind's Flamingo model, built on top of the LLaMA large language model) will be integrated into more and more AI products. Hugging Face also hosts many strong Generative AI models. Here you can find a comprehensive comparison of different LLMs.
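As a sketch of how such an open model can be integrated into a product, the snippet below loads an instruction-tuned Mistral model with the Hugging Face transformers pipeline. The model name is just an example, the prompt follows Mistral's [INST] convention, and running it locally requires a GPU with sufficient memory.

```python
# Sketch: running an open-source instruction-tuned model with Hugging Face transformers.
# The model name is an example; large models need a GPU with enough memory.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",  # requires the accelerate package
)

prompt = "[INST] Why do SMEs adopt open-source LLMs? Answer in two sentences. [/INST]"
print(generator(prompt, max_new_tokens=120)[0]["generated_text"])
```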

Top 4: Generative AI on hardware devices and smartphones


Companies such as Apple and Google are developing AI assistants and offline large language models (LLMs) that reduce cloud costs and respond faster than purely cloud-based solutions. We can expect more personalized and relevant results thanks to the integration with personal devices, a point also highlighted by Qualcomm Technologies. Efficient LLMs that run locally on the device are therefore in high demand for industrial applications.
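One way to experiment with this today is to run a quantized model fully offline, for example with the llama-cpp-python bindings. The sketch below assumes you have already downloaded a GGUF model file; the path is a placeholder.

```python
# Sketch: fully offline inference with a quantized GGUF model via llama-cpp-python.
# The model path is a placeholder for a file you have downloaded yourself.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)
output = llm("Q: Name one benefit of on-device LLMs. A:", max_tokens=48)
print(output["choices"][0]["text"])
```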


Top 5: Multimodality and voice interfaces


Multimodality, which allows for the conversion of one media form into another (e.g., text to image, speech to text, text to video), is expected to gain traction in 2024. While GPT-4 Vision can already interpret images, we anticipate broader application in business contexts this year. Notable use cases include the analysis of medical research and diagnostics (explore this further in our "Generative AI in Healthcare, Medicine, and Pharmaceuticals" training courses) and the generation of manuals from computer-generated designs such as CAD files, architectural plans, and electrical engineering projects.
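For example, an image can already be interpreted programmatically with a vision-capable model. The snippet below is a minimal sketch using the openai Python SDK (version 1.x); the model name reflects the preview model available at the time of writing, and the image URL is a placeholder.

```python
# Sketch: asking a vision-capable model to interpret an image (openai SDK >= 1.0).
# The image URL is a placeholder; OPENAI_API_KEY must be set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the key components shown in this wiring diagram."},
            {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
        ],
    }],
    max_tokens=200,
)
print(response.choices[0].message.content)
```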


Voice assistants are poised to become integral in customer care center operations, automating numerous industries where waiting on hold is currently the norm. This includes sectors such as hospitality, medical centers, banking, insurance call centers, restaurant reservations, and the entertainment industry.
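A typical building block for such voice interfaces is speech-to-text. As a small sketch, a recorded customer call can be transcribed with OpenAI's Whisper endpoint; the file name below is a placeholder.

```python
# Sketch: transcribing a recorded customer call with Whisper (openai SDK >= 1.0).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
with open("customer_call.mp3", "rb") as audio:  # placeholder file name
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)
```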


Top 6: AI Influencers and Avatars for business cases


Influencers will increasingly be augmented by machines, or at least will use tools to create content and videos more quickly. Find more here. Furthermore, digital avatar platforms are poised to transform the labour-intensive creator market. Industries that will be affected include movies, gaming, music (e.g. song generation), modeling, advertising, education, and content creation (such as YouTube, TikTok, and social media professionals), as well as consultancy firms. These sectors will increasingly automate their work using intelligent avatars to reduce production costs and enhance efficiency.


Top 7: Advanced Robotics and Gaming with LLMs


Commercially available advanced robots that combine robotics with the power of large language models could be coming soon. Examples include NEO from 1X (a startup backed by OpenAI) and Optimus from Tesla. In gaming, too, we will see more natural-language conversations with characters; Unreal Engine already provides plugins to build intelligent characters within games.


Top 8: Generative AI leads to a new job market and decreasing wages


The shift towards greater automation is likely to result in a long-term decrease in wages, as labor-intensive tasks (such as programming, consulting, content creation, and management) can be executed more efficiently. In the coming year, we can expect to see further changes in job roles due to the impact of Generative AI, though perhaps not yet in salary levels.


Top 9: Upskilling and openness to Generative AI help retain and attract talent


Companies feel the urgency to train their employees as existing jobs change. People need to understand how to use Generative AI in a compliant, safe, and productive way. Companies that embrace the new technology and upskill their employees will win in the long term for the following reasons:


1) Creation of new business value and models

2) Efficiency gains thanks to more automation

3) A culture of change and experimental openness

4) Thanks to 3), attracting more talent than companies that do not

5) New career paths and options for employees to grow


Top 10: Process Automation meets Generative AI


Thanks to the development of more use-case-specific large language models (LLMs), MLOps tooling, and LLM flow builders, enterprises will be able to integrate their operational and business processes with advanced Generative AI technology. 2024 is the year when we will see many startups and companies delivering significant value by improving process automation with Generative AI.
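As a hedged sketch of one such process step, the snippet below classifies an incoming support ticket so it can be routed automatically. The categories and the `call_llm` helper are assumptions for illustration; in practice this would be one node in a larger flow.

```python
# Sketch: one step of an LLM-assisted business process — classify and route a ticket.
# call_llm() is a placeholder for any chat-completion API; the categories are examples.

CATEGORIES = ["billing", "technical", "cancellation", "other"]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your preferred LLM client here.")

def route_ticket(ticket_text: str) -> str:
    prompt = (
        "Classify the following support ticket into exactly one of "
        f"{CATEGORIES} and answer with the category only.\n\n{ticket_text}"
    )
    category = call_llm(prompt).strip().lower()
    return category if category in CATEGORIES else "other"

# Example (requires a working call_llm implementation):
# route_ticket("I was charged twice for my subscription last month.")
```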


We support your company in analyzing its data, automating processes, and creating value from day one.



Need support with ChatGPT, Generative AI, Chatbots and Voicebots?

🚀 AI Strategy, business and tech support 

🚀 ChatGPT, Generative AI & Conversational AI (Chatbot)

🚀 Support with AI product development

🚀 AI Tools and Automation

Get in touch