"... we shouldn't have tortured a document transfer system ..."
A Hacker News thread from yesterday ...
... pointed to ...
The HN thread contained over 200 comments. Here's a comment from today.
I don't think the internet is well served by a monoculture of user agents all speaking HTTPS (and only HTTPS) and trying to cram everything under the sun into HTML/CSS/JS.
My personal preference would be for more specialized agents speaking protocols designed for their use case: email over SMTP/IMAP (or JMAP!), newsreaders with RSS, chat on XMPP, etc., content browsers with some limited markup language, and maybe games and fat apps loaded as binaries to a sandboxed VM.
Maybe this is unrealistic and that time has passed, but the internet used to work more like what I've described. Today, more and more things are just a JS web app that only runs properly in Chrome.
I agree that it's too late to separate the bloated web from the simpler, useful web. The time to create a bloat:// protocol and bloat client app was 20 years ago.
The Toledo Blade website does not work in an updated Firefox on my Linux computer. I cannot figure it out. Even with all protective shields lowered, I cannot read an article on the Blade website with Firefox. When I try, I receive a message, via a third-party service contracted by the Blade, claiming that I'm using an ad blocker.
A couple days ago, I tried to LOG INTO the Blade with my USER account, and that did not work either with Firefox. I had to use the Brave browser on my Linux computer.
But also, in the mid-to-late 1990s, I created a lot of so-called web apps that used only simple HTML forms on the client side and CGI (Common Gateway Interface) programming on the server. All of the form validation, data reads and writes, and data crunching occurred on the server, with the CGI code dynamically creating HTML to send back to the user.
I did this CGI/HTML programming for internal applications at work. It was easier to use the web browser as a thin client or dumb terminal that only needed to display HTML and simple HTML forms. I did not need to learn how to program for Windows.
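As a rough sketch of that thin-client pattern (in Python here, rather than the C that I actually used back then, and with made-up form fields for illustration): a CGI program reads the query string from an environment variable, does all of the validation on the server, and writes plain HTML to stdout for the browser to display.

```python
#!/usr/bin/env python3
# Minimal CGI-style handler sketch: the browser submits a plain HTML
# form, and the server parses, validates, and renders everything.
import os
from html import escape
from urllib.parse import parse_qs

def render(query_string):
    """Parse the query string, validate it server-side, build an HTML page."""
    params = parse_qs(query_string)
    name = params.get("name", [""])[0].strip()
    if not name:
        # Server-side form validation: no name given, re-show the form.
        body = ('<form method="get">'
                '<input name="name"><input type="submit" value="Go">'
                '</form>')
    else:
        # escape() prevents the submitted value from injecting markup.
        body = f"<p>Hello, {escape(name)}!</p>"
    return f"<html><body>{body}</body></html>"

if __name__ == "__main__":
    # A CGI program gets the request via environment variables and
    # writes an HTTP response (headers, blank line, body) to stdout.
    qs = os.environ.get("QUERY_STRING", "")
    print("Content-Type: text/html\r\n\r\n" + render(qs))
```

The browser never runs any logic; it only renders whatever HTML comes back, which is exactly the dumb-terminal setup described above.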
The Links2 web browser supports HTML forms. The NetSurf web browser is a limited browser, but it's more graphical than Links2. NetSurf supports HTML 4.01 and maybe some aspects of HTML5; I'm unsure. Unlike Links2, NetSurf supports CSS through version 2, meaning that websites can look nicer in NetSurf.
My CGI/HTML-based web applications were used heavily at work 20 years ago. In fact, one app that I created in March 1999 was still used at my former workplace into early last decade, around 2012 or 2013, maybe later than that. The app lasted for at least 13 years. I created the command-line tool in a week in Java, and I created the web-based experience around that command-line tool in CGI using C (and HTML, of course), and that took me two weeks.
Within a month, we went from having nothing to having a tool that a department could test and then actually use for production immediately. I made mods throughout the year, but it simply ran and worked because it was a tool, focused on specific tasks. It helped a department do work faster and easier.
The department could have logged into a Linux box and edited files with Vi/Vim and submitted scripts to cron (batch), but the users were not *nix users. That's why I created the web front-end in C-CGI and HTML to make life easier for the users. They could use a web browser on their Windows computers to do the work. Simple. Thin client/server setup. Dumb terminal setup. The department used my web app to move files from the VAX/VMS to the Linux computer.
My command-line Java app processed flat ASCII files, and it accessed data from an Oracle database. But the web interface hid all of this from the users. The users clicked links, buttons, and completed simple HTML forms. When new users in the department needed to use the tool, it was easy for experienced users to teach the new people. I was not needed to instruct new people.
I worked as a sys-admin and not in the programming department. When I built work-related web apps, I did so mostly in my "spare" time, such as very early in the morning or well into the evening (at work) when my normal sys-admin duties were completed. I loved web programming. HTML on the client and CGI on the server.
Given how web apps are created today, I could not help people at work if I had to build apps with that complexity. It seems complex on both the client and the server now.
But when I create my own web projects today, I still use the simple techniques from 20-plus years ago. I use some CSS today. I still rely on simple HTML forms a lot. And on the server, I still use CGI or something similar. One change over the past several years is that I have created APIs in my CMS apps that can be accessed via REST (JSON over HTTP/HTTPS).
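As a hypothetical sketch of that kind of JSON-over-HTTP endpoint, using the same CGI model as before (the resource names and data here are made up for illustration, not my actual CMS API):

```python
#!/usr/bin/env python3
# CGI-style REST endpoint sketch: same request/response model as the
# HTML version, but the server returns JSON instead of rendered markup.
import json
import os
from urllib.parse import parse_qs

# Stand-in data store; a real app would read a file or database.
ARTICLES = {"1": {"id": "1", "title": "Hello, world"}}

def handle(query_string):
    """Return (content_type, body) for a GET request like ?id=1."""
    params = parse_qs(query_string)
    article_id = params.get("id", [""])[0]
    article = ARTICLES.get(article_id)
    if article is None:
        return "application/json", json.dumps({"error": "not found"})
    return "application/json", json.dumps(article)

if __name__ == "__main__":
    ctype, body = handle(os.environ.get("QUERY_STRING", ""))
    print(f"Content-Type: {ctype}\r\n\r\n{body}")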
In that HN thread, a commenter said that he or she was building a web browser from scratch for personal use. I believe the person intends to create a web rendering engine.
While I agree that a lot of the W3C standards are silly and not worth implementing, I'm still determined to build a new browser from scratch.
I'm building a new OS and browser for myself. There is no "product" and the only "user" I care about is me. :)
I'm not trying to compete with anyone. I'm just building a browser (and operating system) for myself to use and have fun with. :)
Regardless, the main things I aim to do differently:
No JS JIT, only interpreter. I don’t care that much about JS performance as a user and the difference in complexity is astounding.
I prefer the web as a platform for smart documents, not for applications. I don’t want to relinquish any control of my computer to the web, and don’t want most desktop integrations like notifications, etc.
In that same vein, very light on animations/transitions, that type of stuff.
Way less aggressive memory caching. I don’t like the sight of the 100MB+ web content processes that are so common these days.
I'm unsure what a "smart document" is.
Another commenter said:
We haven't even perfected the Smart Documents yet, but browsers have abandoned that work and moved to Web Apps instead.
I need to move on. Yelling at cloud syndrome again.
An HN thread from today:
"Things you can do with a browser in 2020"
It's a back-and-forth discussion involving the web for documents and the web as an operating system that executes sophisticated applications. The web is everything.
I like this HN comment:
It's almost like we shouldn't have tortured a document transfer system until it was able to run full-on applications.
Another HN comment:
The browser is such a great place as far as running an application goes almost universally.
The keyword is "almost". No need to ensure web apps run in all modern web browsers if most people, especially within a company, use Chrome.
ISWYM but I'd be much, much happier if the browser functionality of mainly delivering text and images was kept separate from the application functionality of ... doing bloody everything.
I want to turn off the application side of things for my safety (and I do), but too many sites require it unnecessarily to do the most basic tasks of displaying static text and pictures.
Here's an HN comment that makes zero sense:
The web was originally conceived as an interactive hypermedia platform, not a “document transfer system”.
WTF? That's false. Interactive hypermedia platform??? The original web could only display a slightly enhanced version of raw text. The original web, created by TBL, focused on hypertext and not hypermedia.
Hypertext is text displayed on a computer display or other electronic devices with references (hyperlinks) to other text that the reader can immediately access.
Early HTML could display headings, bullet lists, and a few other elements. It was text. HTML pages linked to other HTML pages. It was neither interactive nor dynamic. It was static and stateless.
The commenter continued:
It's true that it has displaced things that were true document or file transfer systems (Gopher, FTP) because it subsumes those functionalities, but it wasn't ever just that.
Man, that person does not know web history. In early 1993, more Gopher sites existed than websites. Both the web and Gopher consisted of text and links, but the web, thanks to HTML, had the ability to format and display pages a little better.
From early last year:
On March 12, 1989, Tim Berners-Lee submitted the proposal that became the World Wide Web; the first browser, called WorldWideWeb and later renamed Nexus, followed in 1990. In recognition of the web's 30th birthday, a team at CERN in Switzerland embarked on a five-day hackathon to recreate a working replica of the original browser for computing posterity.
On cranking up the roughly 30-year-old machine, the team found that the NeXTcube had a number of web browsers and pages already loaded onto it, but none was the original WorldWideWeb. The first part of the team’s job was to strip back the code and tags that came later, such as IMG from Mosaic, while retaining pieces of the jigsaw such as HTTP, FTP, Gopher, and NNTP.
The result is a fully functioning simulation of the WorldWideWeb browser, later called Nexus, available to use in modern browsers. Obviously, many of the original links are no longer there; however, many of the team got a kick out of searching for current websites and seeing them appear in the inline text browser they’d just recreated.
Sir Tim Berners-Lee continues: ‘I didn't invent the hypertext link either. The idea of jumping from one document to another had been thought about by lots of people, including Vannevar Bush in 1945, and by Ted Nelson (who actually invented the word hypertext). Doug Engelbart in the 1960s made a great system just like WWW except that it just ran on one computer, as the internet hadn't been invented yet. Lots of hypertext systems had been made which just worked on one computer and didn't link all the way across the world. I just had to take the hypertext idea and connect it to the TCP and DNS ideas and - ta-da! - the WorldWideWeb.’
In summation, Remy Sharp of the CERN hackathon team adds: ‘I think there's a number of reasons to preserve these original applications, web pages and most importantly: ideas. The web has evolved over the last three decades and certainly in the web development world there's often talk about how complicated it is. This kind of project is a reminder that not only can websites be read perfectly well in the limited text only view that the WorldWideWeb browser offered, but it also worked on the original NeXT machine that was made some time in the late 80s.
‘The original ideas of the web were based around simplicity and accessibility for all, and by showing people what the browsing experience was like or showing them what the first web pages looked like, it helps refresh that message in the 21st century.’
Also from today at HN:
"CSS Zen Garden (csszengarden.com)"
tailwind.css got referenced a lot in the HN thread, and the tailwind.css author weighed in.
Anyway, that HN thread contains good back-and-forth discussion. Here's one simple exchange:
I've seen people seriously recommend marking up your document like this:
<div style="display: table;">
  <div style="display: table-row;">
    <div style="display: table-cell;">
    </div>
  </div>
</div>
How in the nine fires of hell is that better than just using a table?
Once upon a time every website layout was defined via the Table element. Your side bar would be a cell spanning multiple rows, and inside the sidebar you’d have another nested table that defined more layout. The shift to table-less divs came almost as a massive over-correction to the abuse of nested tables.
It's a bit worse than that, even. Instead of the abuse of semantic elements, you don't have semantic elements at all. It's a giant mess of div tags. We've 100% given up on accessibility on the web. We've also thrown in the towel on device-agnostic HTML as well. For all of the grand plans of the '90s and W3C, they all failed. Every. Single. One. This includes CSS, which is an abomination.
If that weren't bad enough, let's discuss the fundamental core of the web: links.
Links are dead. They are buried under a mountain of SPA walled garden jank, teased at (but never granted) on Pinterest or Quora or LinkedIn or Facebook, or out there scamming users with shortened URL services. Corporate users are subjected to annual security training lessons where they are taught never to trust links (email, text message, or otherwise) because the alternative, training users to understand domain names, is just so impractical that we have all given up. Google gave up, with Chrome. They don't even want to show a domain. It's that bad and awful.
To give you an example of how shit it all really is, I recently had to look up an issue related to a Microsoft product. A quick Google search led me to a number of Stack Overflow and blog posts on the issue. I found a number of links pointing to Microsoft's bug tracker (links from just a year or two ago). And here's the kicker: all the links 404. The bug tracker is simply not there anymore. This is Microsoft we're talking about. They have given up on links. They have so given up on links that they can't even be bothered to let the user know what happened to those links.
Things are significantly worse on the mobile web. I would estimate that a good 90% of the web doesn't work on mobile in any sort of way that would delight a user. It's more that users just deal with this massive pile of shit. Watch someone use a mobile web browser sometime. Watch the nonsense they have to deal with. The janky-ass scroll popup sluggish ad-infested dumpster fire. All of it.
Reply to the janky-ass web experience on mobile devices:
Hence each website tries to create their own app.
Yep. Plus apps give site owners more control over the users.