

Leveraging Generative AI for B2B SaaS: All You Need to Know

Every company right now is thinking through its Generative AI strategy.

Either you are an executive fielding this question from your board and advisors, or you are a manager or individual contributor thinking through the real-world workflow implications of this new tech.

And it’s for good reason. Generative AI has exploded in popularity and mainstream awareness in the past four months, in large part due to the launch of ChatGPT back in November 2022. Few times have businesses been forced to evaluate and implement a strategy so quickly. It reminds me of the advent of the internet, when every company had to think about its online strategy; the emergence of social media, when every company had to think about its Facebook and Twitter strategy; and even mobile, when every company had to think about its SMS strategy.

2023 will be Generative AI’s year. Mark my words.

It’s high time this advanced technology, decades in the making, was applied to sales teams’ workflows to increase efficiency and productivity across the sales process.

But before we can look at the promise of where we are headed, we need to take a step back and look at how we arrived here.

In this article, we’ll explore the history of Generative AI through a broad technical-evolution lens. It will benefit anyone wanting exposure to the backstory of how we arrived at the ChatGPT pandemonium of today.

Late 1970s – Early Days of Neural Nets

This story starts back in the late 70s and early 80s, when researchers were developing neural nets designed to mimic the structure of the human brain and process data the way humans do. The idea was to assemble a set of neurons, each connected to a set of other neurons, so that information could pass from one to another with some very basic logic; together, the network of neurons could perform complicated tasks. Although it was a very primitive imitation of the human brain, amazingly, this architecture has stayed intact through today.
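
To make the idea concrete, here is a minimal sketch of that architecture in Python: a handful of neurons, each applying very basic logic to its inputs and passing the result on. It is purely illustrative (the weights are hand-picked rather than learned), but it shows how simple units compose into a network that computes something no single neuron can, in this case XOR:

```python
# A toy "network of neurons": each neuron sums its weighted inputs
# and applies a simple threshold (step) activation -- the very basic
# logic early researchers wired together. Purely illustrative.

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs, then a step activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

def layer(inputs, weight_rows, biases):
    """A layer is a set of neurons all reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two layers wired together compute XOR (hidden layer: OR and NAND;
# output neuron: AND of the two hidden neurons).
hidden = lambda x: layer(x, [[1, 1], [-1, -1]], [-0.5, 1.5])
output = lambda h: layer(h, [[1, 1]], [-1.5])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", output(hidden(x))[0])  # 0, 1, 1, 0
```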

Two canonical examples of AI have traditionally been speech recognition and image recognition:

  • Speech recognition converts human speech to text
  • Image recognition identifies objects inside images

From the very early days, neural networks were used for both of these tasks, although image recognition required a lot of gymnastics in code: removing the background of images, identifying the boundaries of objects, and converting colors from one form to another. Identifying features inside an image, like eyes, noses, and ears, was painstakingly hand-programmed, but it worked.

While minimal advances were made in both of these areas, neural networks and the associated work on speech and image recognition remained a largely dormant field of research and development for roughly 20 years, from the early 90s until around 2010.

Early 2010s – Deep Neural Nets

In 2012, Google pioneered large-scale deep neural networks, adding far more data, compute, and intermediate layers.

Their breakthrough? They first used this to identify cats in YouTube videos.

The beauty of this method was that they did not have to do any of the coding gymnastics previously required with image processing. Instead, they just took the full image, passed it into the neural network and asked if it had a cat in it.
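
In modern terms, that end-to-end approach looks roughly like the sketch below: raw pixels go in one end and a cat / no-cat answer comes out the other, with no hand-crafted features anywhere. The architecture and sizes here are illustrative only, and the network is untrained, so the output is meaningless until it learns from labeled images:

```python
# Hedged sketch of end-to-end image classification in PyTorch:
# feed in the full raw image, ask for the probability of a cat.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # raw RGB pixels in
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 1),                  # one "is it a cat?" logit
    nn.Sigmoid(),
)

image = torch.rand(1, 3, 64, 64)   # stand-in for a full 64x64 image
p_cat = model(image).item()        # probability the image contains a cat
print(f"P(cat) = {p_cat:.2f}")
```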

This breakthrough revived the field of neural networks and AI for good.

Research accelerated on the hardware side as well during this time; both Google and NVIDIA invested heavily in specialized hardware for neural networks. Consumer applications followed the research. In 2011, Apple had launched Siri, the first mass-market speech recognition application, which was still rough around the edges at the time, though researchers could see it wouldn’t be for long. A few years later, Amazon and Google launched Alexa and Google Home, respectively. Still, the technology had a long way to go before its full potential reached the public.

Mid 2010s – DeepMind

In 2014, Google acquired DeepMind, which built neural networks for playing games. DeepMind, with investment from Google, built AlphaGo, which went on to defeat the world’s top Go players.

→ The documentary on AlphaGo is definitely worth watching for both the human and AI angles.

One of the oldest board games still played to this day, Go is a highly strategic game in which the aim is to surround more territory than your opponent.

Why does this matter?

The big difference between playing Go and, say, playing Chess is that the number of possible next moves in Go is vastly larger than in Chess. Furthermore, unlike Chess pieces, Go stones are all weighted the same. So while you can easily assign “points” in Chess to get some semblance of good possible next moves (as sketched below), this is next to impossible in Go.
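
Here is what that “points” idea looks like in practice: a minimal, illustrative sketch of the classic material-count heuristic used to rank chess positions. No analogous table can exist for Go, where every stone is worth the same:

```python
# Classic chess material values: a crude but workable heuristic for
# comparing positions. Go has no equivalent, since every stone
# carries the same weight. (Illustrative sketch only.)
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # king excluded

def material_score(pieces):
    """Sum piece values: positive favors White (uppercase pieces)."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White has a queen and a pawn; Black has a rook and a knight.
print(material_score(["Q", "P", "r", "n"]))  # 10 - 8 = 2, White ahead
```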

AlphaGo had neural networks that could first generate human-like candidate moves. This was a pivotal moment: it was one of the first industrial applications of Generative AI, using computers to generate candidate moves that looked like human moves. Until then, AI tasks had focused on recognition (like image recognition and speech recognition), not on generating human-like outputs.

Given DeepMind and Google’s rapid advances the year prior, OpenAI was founded in 2015 and set up as a non-profit to democratize AI. OpenAI wanted to ensure that tech giants like Google and Facebook did not run away with technology that was not yet available to the general public.

In 2018, OpenAI built GPT, an early version of a general-purpose pre-trained text generation model that could write human-like text, and late that year Google responded with BERT, its own general-purpose pre-trained language model. Until this point, computers could barely stitch two sentences together, so this was another flag-planting moment for the tech, similar to the advancement seen with AlphaGo, when we went from not being able to beat Go amateurs to beating the very top players.

All of these 2010s advancements by OpenAI and Google primed the world for the floodgates that were about to open.

Early 2020s – GPT-2 and GPT-3

In 2019, OpenAI trained on a lot more text and improved GPT into GPT-2 (GPT is short for Generative Pre-trained Transformer). At this point, Microsoft saw the potential mass application for this tech and invested $1B in OpenAI, similar to how Google had poured its resources into DeepMind five years prior. To take the investment, OpenAI created a capped-profit arm alongside the original non-profit.

In 2020, OpenAI improved on GPT-2 and began releasing GPT-3 in private beta, allowing other tech startups to build industrial applications on top of it; companies like Regie.ai got early access.

Just a year later, in November 2021, GPT-3 was finally ready for prime time. OpenAI removed the waitlist and opened the GPT-3 API to the public (even though Microsoft held an exclusive license to the underlying models, which were not open-sourced), putting OpenAI ahead of the field. Google, Facebook, and others were caught flat-footed.
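
For a sense of what “public via an API” meant in practice, here is a minimal sketch of a GPT-3 completion call as it looked in that era, using the openai Python package’s 0.x interface. The engine name, prompt, and parameters are illustrative, and a personal API key is required:

```python
# Minimal sketch of a GPT-3 completion call via the openai Python
# package (0.x-era interface). Illustrative values throughout.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",  # one of the original GPT-3 engines
    prompt="Write a one-sentence opener for a cold email to a CFO:",
    max_tokens=60,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```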

All of this momentum brings us to a mere four months ago, when, in November 2022, OpenAI released ChatGPT: an easy, chat-based way for anyone to talk to a GPT-3-class model right in the browser, with no APIs or code involved. While a simple chat window may seem like a small step next to the developer interfaces that preceded it, it was actually a monumental leap forward in accessibility for the everyday person.

In four short months, this technology is already proving it can help businesses across different sectors automate frequently asked queries and reduce the human effort required to generate content. However, as with all technology, it solves problems while creating new ones. A looming consideration is our collective social responsibility to use it for good and not evil; to empower humans, not replace them.

What’s next?

Like every disruptive piece of technology that has ever been introduced, all of us will soon adapt to the new normal and have a hard time imagining businesses without it.

The progress in this field has come at nothing short of a breakneck pace since 2012. It’s hard to believe that only 10 years separate the start of work on deep neural networks from today’s mass application of GPT-3. It is equally hard to believe that it has only been three years since the inception of GPT-3 and only a few months since ChatGPT was released.

ChatGPT has truly been the Trojan horse that accelerated the awareness and accessibility of this technology, and it has tremendous application potential when it comes to modernizing revenue teams. But that’s a story for another day.


Srinath Sridhar, CEO & Co-founder of Regie.ai, is a veteran of the tech industry with over 15 years spent leading high-performance engineering teams at some of the most notable tech companies. He was part of the early 100-person engineering team at Facebook and was a Founding Engineer at BloomReach, where he worked on the innovative search and recommendation engine that has influenced the entire search and recommendation ecosystem we use today. Srinath also co-founded Onera, an AI supply chain startup, which was subsequently acquired by BloomReach.