
Making sense of the evolution of the Internet


An exploration of the “why” behind the “what led to what”

•    •    •

This post is the second in a series of posts on the evolution of web technologies:

Part 1 : Precursor to the WWW, that very few know of

Part 2 : Making sense of the evolution of the Internet

Part 3 : Which STACK do you pay your allegiance to?

Part 4 : Angular vs React vs Vue (Work in Progress)

•    •    •

Context #

When Tim Berners-Lee submitted his proposal for the WWW (and its precursor, which detailed the information-management problems at CERN) in 1989-1990, his main goal was to create an interconnected web of information/documents that could capture and represent its larger web of context, so that (new) participants could explore it and “get it” using digital computers.

Conceptually, his WWW was a new mechanism to deliver documents and their larger (constantly growing) context digitally. A watershed moment manifesting that vision was the creation of Mosaic by Marc Andreessen in 1993, the first widely adopted graphical user interface (GUI) based browser, which made the web more accessible and user-friendly for people without technical expertise.

On the surface, the WWW as manifested through web browsers was like …

Early web browsers were more like a printer that would deliver the documents on the screen. The significant achievement of the web was document delivery from a server far away, managed by someone else, and the ability to navigate across documents via hyperlinks. 

– Atishay Jain, HUGO in Action.

That’s right, it was all about interconnecting and rendering static content.

•    •    •

Addressing the need for Robust Styling #

In the early days, styling was done directly within HTML elements using various tags and attributes. This meant that, as webpages grew in size, developers had to repeat a lot of styling markup, and small mistakes led to inconsistencies across the website.

Håkon Wium Lie, who in 1994 was working with Tim Berners-Lee at CERN, created Cascading Style Sheets (CSS) to solve these problems by allowing web developers to create separate stylesheets that could control the appearance of multiple pages, leading to more manageable and accessible content.

Need for Dynamic Content and Interactivity #

As the web grew, the need for dynamic web pages was fueled by use cases like the following:

  1. The ability to frequently update website content without manually editing HTML files led to the development of Content Management Systems (CMS), enabling non-technical personnel to push real-time updates to web content.

  2. The ability to collect information using online forms, and hence the need to interact with databases, birthed functionalities like user registrations, feedback forms, guestbooks, forums, and polls.

  3. The need to show real-time information (such as stock tickers, weather updates, news feeds, or sports scores) necessitated fetching and displaying updated information without requiring the user to manually refresh the page.

Addressing the need for Interactivity (client-side innovation) #

As web pages were initially static, any interactivity had to be implemented on the server side, which was less efficient and resulted in a poor user experience.

Marc Andreessen, after having worked on Mosaic during his undergraduate studies at the University of Illinois at Urbana-Champaign, moved to California to partner with Jim Clark (founder of Silicon Graphics, Inc., or SGI), and together they founded Netscape in 1994.

Netscape became the dominant web browser of the time and introduced many innovative features like cookies (for maintaining user sessions), Secure Sockets Layer (SSL, for encrypted communication), and frames (for displaying multiple parts of a web page simultaneously).

At Netscape, Brendan Eich was working on enabling “scripting” in the browser, so pages could offer client-side interactivity without constantly communicating with the server; his work led to the creation of JavaScript in 1995.

DHTML emerged as a term to describe the combination of HTML, CSS, and JavaScript to create interactive and dynamic web content. It allowed client-side scripts to manipulate the DOM (Document Object Model), enabling actions like changing styles, content, and layout without reloading the page; the approach proved so influential that the World Wide Web Consortium (W3C) standardized the DOM by the late 1990s.
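
To make that concrete, here is a minimal sketch of the kind of DOM scripting DHTML enabled; the element IDs and content are hypothetical, purely for illustration.

```js
// A minimal DHTML-style sketch: toggle an element's content, style,
// and layout entirely on the client, with no page reload.
// The IDs 'toggle-button' and 'details-panel' are hypothetical.
var button = document.getElementById('toggle-button');
var panel = document.getElementById('details-panel');

button.onclick = function () {
  if (panel.style.display === 'none') {
    panel.style.display = 'block';                      // layout
    panel.style.backgroundColor = '#ffffcc';            // style
    panel.innerHTML = '<p>Injected via the DOM.</p>';   // content
  } else {
    panel.style.display = 'none';
  }
};
```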

Then came XML (eXtensible Markup Language) in 1998 as a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. It became widely used for the exchange of data over the internet.

Along with XML came XSLT (eXtensible Stylesheet Language Transformations), a language for transforming XML documents into other XML documents or into other formats such as HTML. This allowed for more dynamic rendering of web content from XML data sources.
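
Browsers eventually exposed XSLT to scripts through the standard XSLTProcessor API. The following is a minimal sketch, with a tiny XML document and stylesheet inlined purely to keep the example self-contained.

```js
// A minimal sketch of in-browser XSLT via the XSLTProcessor API.
var parser = new DOMParser();

var xml = parser.parseFromString(
  '<books><book>HUGO in Action</book></books>', 'application/xml');

var xsl = parser.parseFromString(
  '<xsl:stylesheet version="1.0"' +
  ' xmlns:xsl="http://www.w3.org/1999/XSL/Transform">' +
  '<xsl:template match="/">' +
  '<ul><xsl:for-each select="books/book">' +
  '<li><xsl:value-of select="."/></li>' +
  '</xsl:for-each></ul>' +
  '</xsl:template>' +
  '</xsl:stylesheet>', 'application/xml');

var processor = new XSLTProcessor();
processor.importStylesheet(xsl);

// Transform the XML into an HTML fragment and insert it into the page.
var fragment = processor.transformToFragment(xml, document);
document.body.appendChild(fragment);
```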

During this period, there was a growing emphasis on web standards led by the W3C, aiming to ensure consistent behavior across different web browsers and platforms.

CSS3, introduced as a set of modules starting around 1999, brought many new capabilities, allowing for more sophisticated and visually appealing web designs without relying on images or third-party plugins.

Then came Ajax (Asynchronous JavaScript and XML) around 2005, which allowed web pages to be updated asynchronously by exchanging small amounts of data with the server behind the scenes. This made it possible to update parts of a web page without reloading the whole page. Google Maps is one of the most notable early examples of a web application that used Ajax.
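
At its core, Ajax relied on the XMLHttpRequest object. Here is a minimal sketch of the pattern; the URL and element ID are hypothetical:

```js
// The classic Ajax pattern: fetch data in the background,
// then update one part of the page in place.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/stock-ticker', true); // true = asynchronous

xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Only this element changes; the rest of the page stays put.
    document.getElementById('ticker').innerHTML = xhr.responseText;
  }
};

xhr.send();
```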

Released in 2006, jQuery was a fast, small, and feature-rich JavaScript library that made operations like HTML document traversal and manipulation, event handling, and animation much simpler, with an easy-to-use API that worked across a multitude of browsers. jQuery played a significant role in abstracting complex tasks and contributed greatly to the ease of web development.
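
A small sketch of the kind of conveniences jQuery offered; the selectors and URL below are hypothetical:

```js
// jQuery condensed event handling, animation, and Ajax into one-liners.
$(document).ready(function () {
  // Animate a menu open and closed on click.
  $('#menu-button').click(function () {
    $('#menu').slideToggle(200);
  });

  // Fetch an HTML snippet and insert it, cross-browser quirks hidden.
  $('#comments').load('/comments.html');
});
```

Each of these calls papered over real cross-browser differences that developers would otherwise have handled by hand.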

Around 2010, as mobile devices became ubiquitous, the concept of Responsive Web Design (RWD), introduced by Ethan Marcotte, gained popularity. It was pivotal in making the web accessible on a vast array of devices with different screen sizes, by seamlessly detecting the visitor’s screen size and orientation and changing the layout accordingly.
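
RWD itself is achieved primarily with CSS media queries; to stay within this post’s JavaScript thread, the sketch below shows the script-side counterpart, the standard matchMedia API, with a hypothetical 600px breakpoint and class names:

```js
// React to the same breakpoints CSS media queries use.
var mobileQuery = window.matchMedia('(max-width: 600px)');

function applyLayout(query) {
  // Toggle a class so stylesheets can swap between layouts.
  document.body.className = query.matches ? 'single-column' : 'multi-column';
}

applyLayout(mobileQuery);                           // run once on load
mobileQuery.addEventListener('change', function (event) {
  applyLayout(event);                               // re-run on resize/rotation
});
```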

Officially finalized in 2014, HTML5 introduced new elements and APIs that allowed for the creation of more complex and better-performing web applications. It brought significant improvements over its predecessor, including new semantic elements, embedded video and audio elements without the need for external plugins, and enhanced forms, among others.

Addressing the need for Dynamic Content (server-side innovation) #

One of the earliest attempts to address the need for dynamic content was the introduction of the Common Gateway Interface (CGI) protocol in 1993.

While CGI itself is not a server, it defined a standard method for web servers to interface with executable programs and server-side scripts (written in languages like PHP, Python, or Perl) installed on a server to generate web pages dynamically. This capability was a significant step toward the creation of dynamic content and interactive web applications based on user input or other data.
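
The CGI contract itself is simple: the web server sets environment variables such as QUERY_STRING, runs the program once per request, and relays whatever the program writes to stdout. Scripts of the era were mostly Perl or PHP; for consistency with the rest of this post, here is the same contract sketched in JavaScript (Node.js):

```js
#!/usr/bin/env node
// A sketch of a CGI program: one process per request.
// Request data arrives via environment variables set by the web server.
var query = process.env.QUERY_STRING || '';

// The response is plain text on stdout: headers, a blank line, a body.
process.stdout.write('Content-Type: text/html\r\n\r\n');
process.stdout.write('<html><body>');
process.stdout.write('<h1>Hello from CGI</h1>');
process.stdout.write('<p>You sent: ' + query + '</p>');
process.stdout.write('</body></html>\n');
```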

A clear distinction emerged between web servers (servers focused on serving web pages) and application servers (servers that interacted with databases and other executable programs/services).

Over time, traditional CGI paved the way for FastCGI, which tackled the resource utilization and performance overhead of starting a new process for each request by maintaining a pool of long-lived processes, each able to handle multiple requests.

Then came full-fledged application servers and frameworks that provided more efficient ways to build and deploy dynamic web applications. These servers often came with a plethora of features out of the box, such as session management, templating, and database connectivity, which were cumbersome to implement with CGI alone. Examples include Java EE servers (like Apache Tomcat and IBM WebSphere), Microsoft’s ASP.NET on IIS, and the Ruby on Rails framework.

Technologies like the Apache JServ Protocol (AJP) and the WebSocket protocol emerged to facilitate more efficient communication between web servers and application servers or services, bypassing the need for CGI’s process model. These protocols allow for persistent connections and more efficient data exchange, suitable for modern web applications’ needs.
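
For instance, a WebSocket connection is a single persistent, bidirectional channel, a far cry from CGI’s one process per request. A minimal client-side sketch, with a hypothetical endpoint:

```js
// One long-lived connection; messages flow both ways at any time.
var socket = new WebSocket('wss://example.com/quotes');

socket.onopen = function () {
  socket.send('subscribe: ACME');            // client pushes to the server
};

socket.onmessage = function (event) {
  console.log('server pushed:', event.data); // server pushes back, unprompted
};
```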

Then came serverless computing and Functions as a Service, aka FaaS (e.g., AWS Lambda, Azure Functions, Google Cloud Functions), which abstract the idea of running code in response to requests even further. In these models, the cloud provider dynamically manages the allocation of machine resources at the granularity of a “function”. This can be seen as an evolution of CGI’s goal of dynamically generating responses to web requests, but at a much more scalable and efficient level.
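
As a sketch of the model, here is a minimal function following AWS Lambda’s Node.js handler convention (the event shape shown assumes an HTTP gateway sitting in front of the function):

```js
// The provider invokes this function per request; there is no server
// process, socket handling, or scaling logic for you to manage.
exports.handler = async function (event) {
  // 'event' carries the request details, here assumed to come
  // from an HTTP gateway with query-string parameters.
  var name = (event.queryStringParameters || {}).name || 'world';

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/plain' },
    body: 'Hello, ' + name + '!'
  };
};
```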

•    •    •

Divergence of Server-Side & Client-Side technologies #

As technologies evolved to meet server-side and client-side needs, one can observe a certain divergence, with specific programming languages and tools maturing to serve specific needs, which was understandable.

JavaScript, with its ability to manipulate HTML and CSS in real time, became essential for adding interactivity and dynamic content to web pages, and emerged as the de facto standard language for client-side scripting in web browsers.

On the server side, a variety of programming languages were used, each with its own frameworks and tools, including:

  • PHP: Widely used for server-side scripting, especially in the context of web development, due to its ease of use and built-in web development capabilities.

  • Java: Utilized in enterprise environments, Java was popular for its portability, performance, and extensive ecosystem, including servlets and JSP (JavaServer Pages) for web applications.

  • Python: Gained popularity for web development with frameworks like Django and Flask, appreciated for its readability and versatility.

  • Ruby: Became well-known with the Ruby on Rails framework, which emphasized convention over configuration and aimed to make web development more efficient and accessible.

  • .NET: A framework from Microsoft for building web applications in C# or VB.NET, offering robust tools for enterprise-level applications.

JavaScript gets amped-up and retrofitted with a V8 engine #

While server-side languages and technologies were very conscious of performance, the JavaScript engines within browsers interpreted JavaScript code line by line at runtime, making them inherently slower than compiled alternatives. This was adequate for the simpler, less dynamic web pages and applications of the early internet. However, as web applications became more complex and interactive, the limitations of these engines became more apparent.

There was also significant variability in JavaScript performance across different browsers, as each had its own JavaScript engine with different capabilities and performance characteristics. This variability made it challenging for developers to create consistently fast and responsive web applications.

To address this problem in its Chrome browser, Google introduced its revolutionary V8 JavaScript engine in 2008, bringing Just-In-Time (JIT) compilation to JavaScript: compiling JavaScript directly into native machine code before executing it, which was significantly faster. Apart from JIT, V8 also introduced several other optimization techniques, resulting in a significant leap in JavaScript performance and making it possible to build more complex and interactive client-side web applications. This helped Google Chrome gain significant browser market share, and it was a key factor in the evolution of web applications towards the rich, dynamic experiences we see today.

V8’s success prompted other browser vendors to invest in improving their JavaScript engines, leading to a new era of competition that greatly benefited web performance. Engines like Mozilla’s SpiderMonkey, Apple’s JavaScriptCore (Nitro), and Microsoft’s Chakra introduced their own JIT compilation and optimization techniques. However, with the introduction of the new Chromium-based Edge browser in January 2020, Microsoft shifted away from Chakra for its main browser offerings.

With all these performance-boosting retrofits, JavaScript was suddenly able to keep pace with server-side technologies in terms of performance.

JavaScript revamps to dominate the server side #

Before 2009, one of the major challenges in server-side development was handling high levels of concurrent connections, which often resulted in high resource usage. The major server-side technologies of the day (PHP, Java, the .NET Framework, and Perl) had their own approaches to handling concurrency and blocking operations, often relying on multi-threading or process-per-request models, which, although effective for many use cases, could lead to inefficiencies under heavy load.

In contrast, Ruby (with its EventMachine library) and Python (with its Twisted library) gave developers event-driven, non-blocking I/O with which to build fast and scalable network applications.

In 2009, inspired by EventMachine in Ruby and Twisted in Python, Ryan Dahl built a non-blocking, event-driven runtime on top of JavaScript and Google’s V8 engine, which he named Node.js. It was particularly suited to building scalable network applications, and it quickly gained popularity, enabling developers to use JavaScript on the server side and leading to the development of a vast ecosystem of modules and tools.
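
A minimal sketch of the model: a single process whose event loop services every connection via callbacks, rather than a thread or process per request.

```js
// One process, one event loop, no thread per connection.
var http = require('http');

var server = http.createServer(function (request, response) {
  // This callback runs on the event loop for every request;
  // slow I/O elsewhere does not block other connections.
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from Node.js\n');
});

server.listen(3000, function () {
  console.log('Listening on http://localhost:3000');
});
```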

Given the undeniable dominance of JavaScript in client-side development, there was a large talent pool with expertise in JavaScript. The introduction of Node.js made it easier for organizations to adopt JavaScript for server-side development without needing to train or hire developers with expertise in a different server-side language.

This suddenly opened the floodgates for client-side developers to expand their repertoire and become full-stack developers: developers with the knowledge and skills to work on both the front-end and back-end parts of web applications.

· · ────────────── ·𖥸· ────────────── · ·