The History of Virtual Desktop Infrastructure
Introduction
The evolution of technology is a story of innovation built on the foundations of visionaries of the past.
Sir Isaac Newton once said,
“If I have seen further, it is by standing on the shoulders of giants.”
This sentiment perfectly captures the development of virtual desktop infrastructure (VDI), an industry that began with humble origins and has grown into a vital component of modern computing. From its early days in the 1960s to the sophisticated cloud-based systems of today, VDI has quietly revolutionized the way we work, interact with data, and connect across devices.
Early Years & Pioneers
The history of virtual desktop infrastructure (VDI) begins much like many other major technological breakthroughs: with engineers solving practical problems. VDI traces its roots to technological innovations of the 1960s, pioneered by visionaries like Douglas Engelbart and Jim Rymarczyk, who laid the foundation for modern virtualization.
The first iteration of VDI can be credited to Douglas Engelbart who, in 1968, showcased the real-time collaborative computer system NLS (oN-Line System) at a San Francisco conference. The demonstration featured a remote collaboration between Engelbart and a colleague in Menlo Park, introducing the first public computer video conference and remote desktop.1
This was a historic moment that marked a major leap in the evolution of computing and remote collaboration, foreshadowing many of the essential digital communication tools we use today.
While the demonstration was a technological breakthrough, Engelbart was not the true pioneer of virtualization. That honor belongs to Jim Rymarczyk, who joined IBM in 1968 and became a key figure in virtualization's early days. Rymarczyk worked on experimental time-sharing systems, which became the foundation of subsequent virtual machines.
Initially, Rymarczyk focused on time-sharing systems that allowed multiple users to access and share a single computer simultaneously, optimizing resources and reducing costs. However, time-sharing struggled with performance limitations: the system’s processing power had to be divided equally among users, which could cause slowdowns during peak usage. These systems also carried security risks, since multiple users accessing the same machine created vulnerabilities and data isolation challenges.
This led to the virtualization of the hardware itself, introduced with the IBM CP-67 system in the late 1960s. The primary benefit was that virtual machines shared the overall resources of the hardware, instead of having the resources split equally between all users. Security also improved, because each user ran in a completely separate operating system. Reliability was enhanced as well: no single user could crash the entire system, only their own operating system.
Rymarczyk stated, “CP-67 and its follow-ups launched the virtualization market, giving customers the ability to greatly increase hardware utilization by running many applications at once.” These early IBM concepts eventually served as the inspiration for VMware, which brought virtualization to x86 systems in 1999.4
The Introduction of On-Premises Virtual Desktop Infrastructure
In 1989, a former IBM developer named Ed Iacobucci founded Citrix in Richardson, Texas, introducing the revolutionary thin client/server model that transformed how businesses accessed and managed software. He raised $3 million and brought on his first five employees, who were also former IBM developers.
At its inception, Citrix was actually called Citrus Systems Inc., with its carpets famously matching its name!6
Citrix's goal was to centralize computing power, allowing businesses to run applications on remote servers. The product allowed customers to connect to the same software and information simultaneously, regardless of their location or the type of computer they were using. With this model, users could operate older, less powerful computers—referred to as "thin" clients—while still experiencing the performance of more advanced machines, as the software processing occurred remotely on the server.
As Iacobucci explained to Business Week, “We take any application, deliver it to any device, over any network, through any bandwidth."
Citrix’s thin client/server model became a major software trend in 1999, with customers like Sears, Roebuck and Co., AT&T, Nestlé, and Chevron. On January 31, 2022, it was announced that Citrix would be acquired in a $16.5 billion deal by affiliates of Vista Equity Partners and Evergreen Coast Capital. Citrix and its founder Ed Iacobucci are recognized as pioneers of the thin client/server model, but the true breakthrough came with VMware's meteoric rise.
The Rise of VMware: Virtualization, Hypervisors & Connection Brokers
VMware was founded in 1998 in Palo Alto, California by Diane Greene, Mendel Rosenblum, Scott Devine, Ellen Wang, and Edouard Bugnion. At the time, the team was attempting to build a supercomputer at Stanford, with the goal of enabling multiple operating systems to run concurrently on a single machine, optimizing hardware resource usage. Shortly thereafter, in 1999, VMware 1.0 launched with support for up to 2 GB of RAM, requiring clients to have a Pentium II 266 MHz processor and 64 MB of memory.
VMware 1.0 was a major success, providing significant cost savings and efficiency improvements by allowing businesses to run different operating systems on a standard x86 machine. This was revolutionary, as it maximized hardware utilization. VMware 1.0 gained rapid adoption in enterprise environments for desktop and workstation use, but the team soon recognized a much larger opportunity in server virtualization.
To capitalize on this, VMware introduced ESX Server 1.0 in 2001, bringing hypervisor-based virtualization to servers. The introduction of the hypervisor was ultimately the key to virtualization. The hypervisor sits between the physical hardware and the virtual machines, allowing multiple operating systems (OSs) to run on the same physical server. The hypervisor isolates each VM, ensuring they don’t interfere with one another, even though they share the same underlying hardware.
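To make that layering concrete, below is a minimal sketch using the open-source libvirt Python bindings to list the virtual machines running on top of a hypervisor. It assumes a local KVM/QEMU host rather than ESX, and the connection URI is only an example; it is meant to illustrate the hypervisor-to-VM relationship described above, not any particular vendor's tooling.

```python
# Illustrative only: enumerate the guests running on top of a hypervisor
# using the libvirt Python bindings (pip install libvirt-python).
# The URI targets a local KVM/QEMU host; other hypervisors expose the
# same model through different libvirt drivers.
import libvirt

# Open a read-only connection to the hypervisor's management layer.
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise RuntimeError("Failed to connect to the hypervisor")

# Each "domain" is an isolated virtual machine sharing the same
# physical hardware, as described above.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"{dom.name()}: {'running' if running else 'not running'}")

conn.close()
```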
The next big leap came with the release of ESX Server 1.5 in 2002, which introduced several critical enhancements that made it a game-changer for enterprise data centers.
ESX Server 1.5 offered significantly better scalability and resource utilization compared to ESX 1.0. Most notably, it supported up to 64 concurrent virtual machines (VMs), allowed for larger physical memory (up to 64 GB of RAM), and introduced advanced resource management features, such as improved disk I/O bandwidth control and more efficient memory management.
Additionally, ESX Server 1.5 added support for clustering VMs across or within systems, enabling high availability and further optimizing server consolidation. These advancements reduced the total cost of ownership (TCO) for enterprises by maximizing server resource usage and improving performance, which made it highly attractive for businesses seeking to virtualize their data centers. This marked a significant leap forward from VMware ESX 1.0, which had laid the foundation but lacked the same level of scalability and manageability.
With these innovations, VMware became a leader in server virtualization. However, despite mastering server virtualization, there was no concept of a connection broker at the time. Customers connected to dedicated virtual machines (often running Windows XP) using the Remote Desktop Protocol (RDP). This approach worked but was limited in scalability and functionality, as RDP primarily focused on individual connections to specific desktops.
The breakthrough came in 2005 with the introduction of the connection broker, which was unveiled at the VMworld conference. The connection broker dramatically improved the experience by providing IT administrators with greater control over multiple virtual machines, enabling better scalability and management of virtual desktops. Unlike RDP, which focused on individual remote sessions, the connection broker acted as an intermediary, simplifying user access and ensuring secure, efficient delivery of virtual desktops and applications in a VDI or remote desktop environment.
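The following is a deliberately simplified, hypothetical sketch of the brokering idea; the pool contents, names, and logic are invented for illustration and are not VMware's implementation. It highlights the core difference from plain RDP: the user asks the broker for a desktop, and the broker decides which virtual machine to hand back.

```python
# Hypothetical sketch of a connection broker. Real brokers add
# authentication, entitlement checks, load balancing, protocol
# negotiation, and session persistence on top of this idea.
from dataclasses import dataclass, field

@dataclass
class DesktopPool:
    # Pool of free virtual desktop identifiers (placeholder values).
    available: list = field(default_factory=lambda: ["vm-101", "vm-102", "vm-103"])
    # Existing user-to-desktop assignments.
    assignments: dict = field(default_factory=dict)

    def broker_connection(self, user: str) -> str:
        # Reconnect the user to their existing session if one exists.
        if user in self.assignments:
            return self.assignments[user]
        # Otherwise assign the next free desktop from the pool.
        if not self.available:
            raise RuntimeError("No desktops available")
        desktop = self.available.pop(0)
        self.assignments[user] = desktop
        return desktop

pool = DesktopPool()
print(pool.broker_connection("alice"))  # vm-101
print(pool.broker_connection("alice"))  # vm-101 again (same session)
print(pool.broker_connection("bob"))    # vm-102
```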
Subsequently, VMware introduced several key innovations that expanded its portfolio and further solidified its leadership in the virtualization space. ESXi, a slimmed-down hypervisor introduced in the late 2000s, became the industry standard for server virtualization, while vSphere (released in 2009) dominated enterprise data centers with features for high availability and workload management. VMware View (launched in 2008 and later rebranded as Horizon) established VMware as a leader in the virtual desktop infrastructure (VDI) market. VMware vCloud (2010) helped enterprises transition to private and hybrid clouds, although it faced competition from AWS.
Subsequent innovations, such as VMware NSX (2013), which revolutionized network virtualization with micro-segmentation and security automation, and VMware Cloud on AWS (2017), which bridged the gap between on-premise environments and the public cloud, further established VMware’s dominance.
Finally, VMware Tanzu (2019) positioned the company within the cloud-native and Kubernetes ecosystem, helping enterprises modernize applications while leveraging their existing infrastructure.
VMware's success also made it an acquisition target: EMC acquired VMware in 2004 for $635 million, and EMC itself was acquired by Dell in 2015 for $67 billion, forever solidifying VMware’s place in the broader technology ecosystem.
Modern Era: The Introduction of the Cloud
Although Microsoft Azure was officially announced in October 2008 and became commercially available in 2010, its origins can be traced back to 2005 when Ray Ozzie, then Microsoft's Chief Technology Officer, wrote about the concept of a revolutionary cloud platform. Ozzie, an early advocate of Software as a Service (SaaS), outlined his vision in a famous 2005 internal email. He proposed a future where businesses could seamlessly and cost-effectively manage IT infrastructure via the cloud. Initially, Microsoft leadership, including CEO Steve Ballmer, resisted this shift to the cloud, fearing it would cannibalize core products like Windows and Office. However, Ballmer eventually embraced Ozzie's vision, leading to the announcement of Windows Azure at the 2008 Professional Developers Conference under the codename Project Red Dog.
With the launch of Windows Azure in 2010, Microsoft introduced a fully cloud-native platform, which was a pivotal shift from the traditional, on-premise virtual desktop infrastructure (VDI) solutions offered by Citrix and VMware. Unlike those platforms, which initially relied heavily on on-premise infrastructure, Azure allowed businesses to move their IT infrastructure to the cloud entirely. This was a game-changer for VDI, as Azure offered greater scalability, flexibility, and cost-efficiency. The ability to dynamically scale resources on demand without the need for heavy upfront investment in hardware distinguished Azure from the hybrid or on-prem models of VMware and Citrix.
A significant part of Azure's success was the cloud-first strategy that Microsoft adopted under Satya Nadella’s leadership. In 2014, Azure was rebranded from Windows Azure to Microsoft Azure, reflecting its growing versatility, especially with its ability to support open-source software and Linux-based systems, further expanding its reach. This broad compatibility enabled Azure to support multiple programming languages and frameworks, such as PHP, Java, and SQL, marking a considerable advancement over other VDI solutions, which were often more limited in scope and integration capabilities.
The biggest advantage Azure brought to the VDI space was its fully cloud-based infrastructure. While Citrix and VMware were known for their strong on-premise VDI solutions, Azure made it possible to eliminate the need for expensive, on-site hardware and data centers. Businesses could now leverage Azure’s global network of data centers to provide scalable, pay-as-you-go virtual desktops to employees anywhere in the world. This not only improved cost efficiency but also allowed organizations to adapt quickly to fluctuating workforce needs, such as remote work, without being constrained by physical server limitations.
In contrast, Citrix and VMware, though pioneers in virtualization, were initially tied to more complex, hardware-dependent models. While both companies eventually adopted cloud services, Azure had already positioned itself as the leader by offering a comprehensive ecosystem that integrated virtual desktops, cloud computing, AI, and storage—all within the same platform. Azure provided a seamless environment for businesses to run all their IT operations on one platform, something VMware and Citrix couldn’t match in their earlier stages.
Another critical aspect was Azure’s global accessibility. Thanks to its global data centers, users could access virtual desktops from anywhere with ease. This feature became essential for modern businesses that were embracing remote and hybrid work environments. By contrast, VMware and Citrix required more complex configurations to support global scaling and didn’t have Azure’s level of global reach and flexibility early on.
Azure also introduced advanced security features, including multi-factor authentication (MFA), role-based access control (RBAC), and encryption, ensuring that businesses could protect their virtual desktops and meet regulatory requirements more easily. While VMware and Citrix offered robust security, Azure's native integration of these cloud-based security measures streamlined the process and made cloud VDI even more appealing.
The biggest advancement Azure made that Citrix and VMware didn’t was its ability to offer a fully cloud-native infrastructure, removing the dependency on on-premise hardware while providing global scalability, cost-efficiency, and an integrated ecosystem of services. This flexibility, combined with superior security, global reach, and the ability to dynamically scale resources, positioned Azure as a leader in the cloud space. By 2022, Azure served over 722 million users, solidifying its role as a foundational platform for modern cloud services, with robust support for VDI among a wide array of enterprise solutions.11
The Emergence of Softdrive
While providers like Microsoft have championed a cloud-first approach, VDI solutions have primarily focused on IT priorities—centralized management and security—at the expense of the end-user experience. Traditional virtual desktop infrastructure has long fallen short for users with high-performance needs, particularly those working with graphical applications. The result has been virtual desktops that struggle to meet user expectations, with issues like streaming latency, poor color fidelity, and limited usability.
For VDI to succeed, it needs to prioritize the end-user experience, delivering performance that feels as seamless as a local machine. Softdrive aims to redefine VDI by creating a solution that offers low latency, true color accuracy, and a user-friendly setup—all at an accessible cost. This approach is what virtual desktops should have been from the start: a tool that not only meets the needs of IT but genuinely empowers end-users.
The approach we took at Softdrive was to start with the end-user experience and work backwards. In 2018, we set out to revolutionize end-user computing by developing Softstream: remote desktop software designed to overcome the limitations of traditional VDI. Rather than relying on existing protocols, we created our own encoder and decoder, optimized for the GPU to deliver faster performance. Recognizing that TCP-based latency remained an issue, we engineered a custom UDP protocol with added reliability through selective integration of TCP and other proprietary technologies.
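Softstream itself is proprietary, so the sketch below is only a generic illustration of the underlying idea: framing data over UDP with sequence numbers and retransmitting selectively when the receiver reports a missing frame, rather than inheriting TCP's in-order, head-of-line-blocking delivery. All packet formats, ports, and names are invented for this example.

```python
# Purely illustrative: selective reliability layered on top of UDP.
# This is not Softdrive's Softstream protocol; every detail here is
# invented for the sketch.
import socket
import struct

SEQ_HEADER = struct.Struct("!I")  # 4-byte sequence number prefix

def send_frame(sock, addr, seq, payload, history):
    # Prefix each datagram with a sequence number and remember it so a
    # lost frame can be retransmitted on its own, without stalling the
    # rest of the stream.
    datagram = SEQ_HEADER.pack(seq) + payload
    history[seq] = datagram
    sock.sendto(datagram, addr)

def handle_nack(sock, addr, missing_seq, history):
    # Resend only the specific frame the receiver reported missing.
    if missing_seq in history:
        sock.sendto(history[missing_seq], addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver = ("127.0.0.1", 9999)  # hypothetical endpoint
    sent = {}
    for seq, frame in enumerate([b"frame-0", b"frame-1", b"frame-2"]):
        send_frame(sock, receiver, seq, frame, sent)
    # If the receiver signalled that frame 1 never arrived:
    handle_nack(sock, receiver, 1, sent)
```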
To further enhance the user experience, Softdrive developed a suite of custom drivers for input devices, printers, webcams, and more, minimizing latency at every step. With Softstream, Softdrive has built a technology that delivers the responsiveness and performance end-users demand.
To address the high costs of cloud desktops on AWS and Azure, Softdrive created Softvirt, a virtualization stack designed for maximum cost-efficiency, while significantly improving end-user streaming through Softstream. We leveraged bare-metal servers to access hardware directly, optimizing compute resources for greater value. We customized the KVM hypervisor within our stack, starting with GPU optimization and later refining CPU performance. The result is a solution that significantly reduces costs compared to AWS and Azure, while delivering the high performance end users demand.
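The details of that stack are proprietary as well, but the building blocks are familiar: KVM on bare metal with the GPU passed directly through to the guest. As a generic illustration only (not Softdrive's Softvirt), the sketch below assembles a QEMU/KVM command line with VFIO GPU passthrough; the PCI address, disk path, and sizing are placeholders.

```python
# Generic illustration: launch a KVM guest with a GPU passed through
# via VFIO. Not Softdrive's stack; all values below are placeholders.
import subprocess

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",               # use the KVM hypervisor, not pure emulation
    "-cpu", "host",              # expose host CPU features to the guest
    "-smp", "8",                 # vCPU count (placeholder sizing)
    "-m", "16G",                 # guest memory (placeholder sizing)
    "-device", "vfio-pci,host=0000:65:00.0",                        # GPU passthrough (placeholder PCI address)
    "-drive", "file=/var/vm/desktop.qcow2,format=qcow2,if=virtio",  # placeholder disk image
]

subprocess.run(qemu_cmd, check=True)
```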
Although Softstream and Softvirt technologies represent substantial technological advancements in remote desktop software and virtualization, Softdrive recognized the need for a unifying solution to integrate these tools and make virtual desktops truly accessible. Our goal was to simplify implementation to the point where even a CEO could set it up with ease. This led to the development of Softnet, a framework designed to seamlessly bind these technologies together, enabling businesses to adopt and manage virtual desktops effortlessly.
The future of desktops is the cloud; however, that future is predicated on simplicity and the right balance of performance and cost. Softdrive is optimistic about this future, drawing a parallel between streamed virtual computers served from today's data centers and the role electricity and power plants played in the 20th century.
Before the introduction of the power plant, every company had to generate its own power with a local, on-site generator. Firms had to purchase, set up, and maintain these often finicky machines, much like computers today. That changed with the advent of a centralized power source: the power plant, which could deliver power on demand. Now no one would think about installing a local generator. The same link can be drawn between computers and data centers. Data centers offer a centralized source of high computing power, just like the power plants of the past, making them a perfect fit for streaming powerful computers.
Softdrive has refined the virtual desktop experience by addressing long-standing limitations in performance, cost, and accessibility. Through Softstream, we engineered a high-performance remote desktop solution optimized for GPU and low-latency streaming. With Softvirt, we developed a cost-effective virtualization stack using bare-metal servers to maximize efficiency and reduce expenses compared to major cloud providers like AWS and Azure. Finally, Softnet serves as the cohesive framework that integrates these technologies, making virtual desktops easy to deploy and manage—even for non-technical leaders. Together, these innovations position Softdrive as a forward-thinking leader, delivering virtual desktops that meet both IT and end-user needs, bridging the gap that traditional VDI solutions left open.
Footnotes
1: https://www.britannica.com/biography/Douglas-Engelbart
2: https://www.britannica.com/biography/Douglas-Engelbart
3: https://www.ithistory.org/honor-roll/mr-james-jim-rymarczyk
4: https://www.ithistory.org/honor-roll/mr-james-jim-rymarczyk
5: https://x.com/citrix/status/469632493277446144
6: https://redcircle.blog/2009/02/17/when-citrix-was-citrus/
7: https://redcircle.blog/2009/02/17/when-citrix-was-citrus/
8: https://redcircle.blog/2009/02/17/when-citrix-was-citrus/
9: https://redcircle.blog/2009/02/17/when-citrix-was-citrus/
10: https://www.flickr.com/photos/begley/2982010172/
11: https://techjury.net/blog/azure-statistics/