Quote - Mar 19, 2020

"... we shouldn't have tortured a document transfer system ..."

A Hacker News thread from yesterday ...

https://news.ycombinator.com/item?id=22615894

... pointed to ...

https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html

The HN thread contained over 200 comments. Here's a comment from today.

I don't think the internet is well served by a monoculture of user agents all speaking HTTPS (and only HTTPS) and trying to cram everything under the sun into HTML/CSS/JS.

My personal preference would be for more specialized agents speaking protocols designed for their use case: email over SMTP/IMAP (or JMAP!), newsreaders with RSS, chat on XMPP, etc, content browsers with some limited markup language, maybe games and fat apps loaded as binaries to a sandbox VM.

Maybe this is unrealistic and that time is passed, but the internet used to work more like what I've described. Today, more and more things are just a JS webapp that only runs properly in Chrome.

I agree that it's too late to separate the bloated web from the simpler, useful web. The time to create a bloat:// protocol and bloat client app was 20 years ago.

The Toledo Blade website does not work in an updated Firefox on my Linux computer. I cannot figure it out. Even with all protective shields lowered, I cannot read an article on the Blade website with Firefox. When I try, I receive a message, via a third-party service contracted by the Blade, claiming that I'm using an ad blocker.

A couple of days ago, I tried to LOG INTO the Blade with my USER account, and that did not work in Firefox either. I had to use the Brave browser on my Linux computer.

Login forms are now complicated web apps. JavaScript and, apparently, a Chrome-like browser are needed to enter a username and password into text input fields and to click a submit button. These are no longer simple HTML forms.

Many websites require JavaScript to be enabled in the web browser to use the site's search function. JavaScript is required to type into the search text input field and to click the search submit button. Again, these are no longer simple HTML forms. The search field and button are now complex web applications.

One claim is that requiring JavaScript for login and search pages prevents bots from hammering servers. That's the claim, anyway: having JavaScript enabled supposedly implies that a human is using the service.

Anyway, back in the late 1990s and early aughts, when Java Applets and Flash were somewhat popular, it was obvious that the simple web was heading towards a lot of bloat and complexity. The thin client or dumb terminal setup of the mid-1990s web was disappearing fast by 2000, when Dynamic HTML via client-side JavaScript gained traction.

I created Java Applets in the late 1990s, and I enjoyed making them. Most of my applets, however, were not for use at work. I also used client-side JavaScript to create web apps in the late 1990s and early aughts, but again, this was either for demos of what we might do at work or for my own projects. That programming was fun, and I considered it the future of the web. Even in the late 1990s, I got excited about jamming more complexity into web browsers.

But also in the mid-to-late 1990s, I created a lot of so-called web apps that used only simple HTML forms on the client side and server-side programming via the Common Gateway Interface. All of the form validation, data reads and writes, and data crunching occurred on the server, with the CGI code dynamically creating HTML to send back to the user.

I did this CGI/HTML programming for internal applications at work. It was easier to use the web browser as a thin client or dumb terminal that only needed to display HTML and simple HTML forms. I did not need to learn how to program for Windows.

I created CGI programs in C, TCL, and Perl Server Pages (within IIS). I created server-side code (CGI programs) on Linux, HP-UX, Windows NT, and maybe other OSes. I don't think that I used CSS. I did not use nor need client-side JavaScript for these web programs. I created these programs for work from late 1996 to about 2003. I considered it simple client/server programming.
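
To show the general shape of that approach, here's a rough sketch of a minimal search page written as a C CGI program. This is not any specific program that I wrote; the field names and the path are made up for the example. The browser only needs to render HTML and submit a plain HTML form; everything else happens on the server.

    /* Rough sketch of a minimal CGI program in C -- not any specific
       program that I wrote; the file and field names are made up.
       The web server sets QUERY_STRING for a GET request, and whatever
       this program prints to stdout goes back to the browser. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *query = getenv("QUERY_STRING");

        /* A CGI response starts with a header and a blank line. */
        printf("Content-Type: text/html\n\n");
        printf("<html><head><title>Search</title></head><body>\n");

        if (query == NULL || strstr(query, "q=") == NULL) {
            /* No search term yet: send a plain HTML form.
               No client-side JavaScript needed to type or submit. */
            printf("<form method=\"get\" action=\"/cgi-bin/search.cgi\">\n");
            printf("<input type=\"text\" name=\"q\">\n");
            printf("<input type=\"submit\" value=\"Search\">\n");
            printf("</form>\n");
        } else {
            /* All validation and data crunching happens here on the
               server. A real program would URL-decode and HTML-escape
               the query, hit flat files or a database, and build the
               results page dynamically. */
            printf("<p>You searched for: %s</p>\n", query);
        }

        printf("</body></html>\n");
        return 0;
    }

Compile it, drop the binary into cgi-bin, and any browser that can render an HTML form can use it, Links2 included.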

The Links2 web browser supports HTML forms. The NetSurf web browser is a limited browser, but it's more graphical than Links2. NetSurf supports HTML 4.01 and maybe some aspects of HTML5; I'm unsure. Unlike Links2, NetSurf supports CSS through version 2, meaning that websites can look nicer in NetSurf.

The thin client web or lightweight web would be something like the NetSurf web browser: no client-side JavaScript, just HTML 4.01 and CSS 2. Do we really need HTML5 and CSS3 for a lightweight web? I don't think so.

My CGI/HTML-based web applications were used heavily at work 20 years ago. In fact, one app that I created in March 1999 was still used at my former workplace into early last decade, around 2012 or 2013, maybe later than that. The app lasted for at least 13 years. I created the command-line tool in a week in Java, and I created the web-based experience around that command-line tool in CGI using C (and HTML, of course), and that took me two weeks.

Within a month, we went from having nothing to having a tool that a department could test and then actually use for production immediately. I made mods throughout the year, but it simply ran and worked because it was a tool, focused on specific tasks. It helped a department do work faster and easier.

The department could have logged into a Linux box, edited files with Vi/Vim, and submitted scripts to cron (batch), but the users were not *nix users. That's why I created the web front-end in C-CGI and HTML to make life easier for the users. They could use a web browser on their Windows computers to do the work. Simple. Thin client/server setup. Dumb terminal setup. The department used my web app to move files from the VAX/VMS system to the Linux computer.

My command-line Java app processed flat ASCII files, and it accessed data from an Oracle database. But the web interface hid all of this from the users. The users clicked links, buttons, and completed simple HTML forms. When new users in the department needed to use the tool, it was easy for experienced users to teach the new people. I was not needed to instruct new people.

I worked as a sys-admin and not in the programming department. When I built work-related web apps, I did so mostly in my "spare" time, such as very early in the morning or well into the evening (at work) when my normal sys-admin duties were completed. I loved web programming. HTML in the client and CGI on the server.

With how web apps are created today, I could not help people at work the same way if I had to build the apps with today's complexity. It seems complex on both the client and the server now.

But when I create my own web projects today, I still use the simple techniques from 20-plus years ago. I use some CSS today. I still rely on simple HTML forms a lot. And on the server, I still use CGI or something similar. One change over the past several years is that I have created APIs in my CMS apps that can be accessed via REST (JSON over HTTP/HTTPS).
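
As a rough illustration of that kind of endpoint, and not my actual CMS code, a CGI program can answer a REST-style GET request with JSON instead of HTML. The URL and the JSON fields below are made up for the example.

    /* Sketch of a CGI program that returns JSON instead of HTML --
       not my actual CMS code; the URL and fields are made up.
       Example request: GET /cgi-bin/api.cgi?post=example-slug */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *query = getenv("QUERY_STRING");

        printf("Content-Type: application/json\n\n");

        /* A real program would parse the query string, look up the
           post, and escape the strings before printing them as JSON. */
        printf("{\"request\": \"%s\", \"title\": \"Example post\", "
               "\"html\": \"<p>Hello</p>\"}\n",
               query ? query : "");
        return 0;
    }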

For my CMS apps, including the web-based static site generator that I use at sawv.org, I also created a JavaScript "editor" that I optionally use when writing/editing a post for a long time. My JavaScript "editor" sends the markup to the server for processing, and then it displays the HTML output to me. Editor is probably a misnomer. It's more of a gateway, sending JSON to the server and receiving JSON from the server app.

My JavaScript editor, however, contains features that I like, such as auto-saving, by sending the markup to the server to be saved. The default auto-save interval is every five minutes, but I can change that within my JavaScript editor. My editor offers a split-screen, but it's not a live preview screen because I find that horribly distracting. The right screen displays the HTML, created by the server CMS. My editor can be switched to a single screen writing area, which is how I use it most of the time. I can switch to a dark theme writing area. It's simple and nice. FOR ME. I despise WYSIWYG editors. It's simple to learn and use. For me.

In my perfect web world, I could use NetSurf for most of my reading and creating at sawv.org and for reading most Web of Documents websites, but I couldn't use my JavaScript editor. I would need to use the bloat:// browser, but that would mean that my CMS would need to support the bloat:// protocol too. Mmmm.

In that HN thread, a commenter said that he or she was building a web browser from scratch for personal use. I believe the person intends to create a web rendering engine.

While I agree that a lot of the W3C standards are silly and not worth implementing, I'm still determined to build a new browser from scratch.

I'm building a new OS and browser for myself. There is no "product" and the only "user" I care about is me. :)

I'm not trying to compete with anyone. I'm just building a browser (and operating system) for myself to use and have fun with. :)

Regardless, the main things I aim to do differently:

No JS JIT, only interpreter. I don’t care that much about JS performance as a user and the difference in complexity is astounding.

I prefer the web as a platform for smart documents, not for applications. I don’t want to relinquish any control of my computer to the web, and don’t want most desktop integrations like notifications, etc.

In that same vein, very light on animations/transitions, that type of stuff.

Way less aggressive memory caching. I don’t like the sight of 100MB+ web content processes that’s so common these days.

I'm unsure what a "smart document" is.

Another commenter said:

We haven't even perfected the Smart Documents yet, but browsers has abandoned that work and moved to Web Apps instead.

I need to move on. Yelling at cloud syndrome again.

For my web reading, I can use the limited Links2 web browser and Firefox with security and privacy shields up (JavaScript disabled). I can use the Brave web browser for overly complex, bloated-ass websites, such as my bank's website/app, when I need to log in and perform tasks.


An HN thread from today:

"Things you can do with a browser in 2020"

https://github.com/luruke/browser-2020

https://news.ycombinator.com/item?id=22630143

It's a back-and-forth discussion about the web as a home for documents versus the web as an operating system that executes sophisticated applications. The web is everything.

I like this HN comment:

It's almost like we shouldn't have tortured a document transfer system until it was able to run full-on applications.

Another HN comment:

The browser is such a great place as far as running an application goes almost universally.

The keyword is "almost". No need to ensure web apps run in all modern web browsers if most people, especially within a company, use Chrome.

Another comment:

ISWYM but I'd be much, much happier if the browser functionality of mainly delivering text and images was kept separate from the application functionality of ... doing bloody everything.

I want to turn off the application side of things for my safety (and I do), but too many sites require it unnecessarily to do the most basic tasks of displaying static text and pictures.

Here's an HN comment that makes zero sense:

The web was originally conceived as an interactive hypermedia platform, not a “document transfer system”.

WTF? That's false. Interactive hypermedia platform??? The original web could only display a slightly enhanced version of raw text. The original web, created by TBL, focused on hypertext and not hypermedia.

https://en.wikipedia.org/wiki/Hypertext

Hypertext is text displayed on a computer display or other electronic devices with references (hyperlinks) to other text that the reader can immediately access

Early HTML could display headings, bullet lists, and a few other elements. It was text. HTML pages linked to other HTML pages. It was neither interactive nor dynamic. It was static and stateless.

The commenter continued:

It's true that it has displaced things they were true document or file transfer systems (Gopher, FTP) because it subsumes those functionalities, but it wasn't ever just that.

Man, that person does not know web history. In early 1993, more Gopher sites existed than websites. Both the web and Gopher consisted of text and links, but the web, thanks to HTML, had the ability to format and display pages a little better.

Then the web and HTML added images and the Common Gateway Interface. It did not take long for the web to surpass Gopher, maybe by the end of 1993. JavaScript was invented in 1995. The web did not start out as an interactive system, but it became one.

From early last year:

https://www.bcs.org/content-hub/cern-team-recreates-the-worldwideweb/

On March 12, 1989 Tim Berners-Lee launched the first internet browser, called the WorldWideWeb - later renamed Nexus. In recognition of its 30th birthday, a team at CERN in Switzerland embarked on a five day hackathon to recreate a working replica of the original browser for computing posterity.

On cranking up the nearly 40 year old machine, the team found that the NeXTcube had a number of web-browsers and pages already loaded onto the machine, but none were the original WorldWideWeb. The first part of the team’s job was to strip back the code and tags that came later, such as IMG in Mosaic while retaining pieces of the jigsaw such as HTTP, FTP, Gopher and NNTP.

The CERN team of nine and their roles in the project have been described by Jeremy Keith, a web developer and team member at CERN: ‘Remy [Sharp] did all the JavaScript for the recreated browser ...in just five days! Kimberly [Blessing] was absolutely amazing, diving deep into the original source code of the application on the NeXT machine we borrowed. She uncovered some real gems. Of course, Mark [Boulton] wanted to make sure the font was as accurate as possible. He and Brian [Suda] went down quite a rabbit hole, and with remote help from David Jonathan Ross, they ended up recreating entire families of fonts. John [Allsop] exhaustively documented UI patterns that Angela [Ricci] turned into marvellous HTML and CSS. Through it all, Craig [Mod] and Martin [Chiteri] put together the accompanying website.’

The result is a fully functioning simulation of the WorldWideWeb, later called Nexus, available to use in modern browsers. Obviously, many of the original links created in 1989 are no longer there, however many of the team got a kick out of searching for current websites and seeing them appear in the inline text browser they’d just recreated.

Sir Tim Berners-Lee continues: ‘I didn't invent the hypertext link either. The idea of jumping from one document to another had been thought about by lots of people, including Vannevar Bush in 1945, and by Ted Nelson (who actually invented the word hypertext). Doug Engelbart in the 1960s made a great system just like WWW except that it just ran on one computer, as the internet hadn't been invented yet. Lots of hypertext systems had been made which just worked on one computer and didn't link all the way across the world. I just had to take the hypertext idea and connect it to the TCP and DNS ideas and - ta-da! - the WorldWideWeb.’

In summation, Remy Sharp of the CERN hackathon team adds: ‘I think there's a number of reasons to preserve these original applications, web pages and most importantly: ideas. The web has evolved over the last three decades and certainly in the web development world there's often talk about how complicated it is. This kind of project is a reminder that not only can websites be read perfectly well in the limited text only view that the WorldWideWeb browser offered, but it also worked on the original NeXT machine that was made some time in the late 80s.

‘The original ideas of the web were based around simplicity and accessibility for all, and by showing people what the browsing experience was like or showing them what the first web pages looked like, it helps refresh that message in the 21st century.’

https://worldwideweb.cern.ch/

Here's the original web browser, recreated with JavaScript in 2019. This is not hypermedia.

https://worldwideweb.cern.ch/browser/


Also from today at HN:

"CSS Zen Garden (csszengarden.com)"

http://www.csszengarden.com

https://news.ycombinator.com/item?id=22627018

Tailwind CSS got referenced a lot in the HN thread. Here's the Tailwind CSS author:

https://adamwathan.me/css-utility-classes-and-separation-of-concerns/

Anyway, that HN thread contains good back-and-forth discussions. Here's one simple thread:

I've seen people seriously recommend marking up your document like this:

    <div style="display: table;">
      <div style="display: table-row;">
        <div style="display: table-cell;">
        </div>
      </div>
    </div>

How in the nine fires of hell is that better than just using a table?

Reply:

Once upon a time every website layout was defined via the Table element. Your side bar would be a cell spanning multiple rows, and inside the sidebar you’d have another nested table that defined more layout. The shift to table-less divs came almost as a massive over-correction to the abuse of nested tables.

We are in similar egregious territory now days with how we’re doing styled components directly in the javascript and totally forgoing some of the principles of CSS (we are effectively in-lining styles , for everything). That’s alright, we go through this upending in frontend every 3 years anyway.

Another reply:

It's a bit worse than that, even. Instead of the abuse of semantic elements, you don't have semantic elements at all. It's a giant mess of div tags. We've 100% given up on accessibility on the web. We've also thrown in the towel on device-agnostic HTML as well. For all of the grand plans of the '90s and W3C, they all failed. Every. Single. One. This includes CSS, which is an abomination.

If that weren't bad enough, let's discuss the fundamental core of the web: links.

Links are dead. They are buried under a mountain of SPA walled garden jank, teased at (but never granted) on Pinterest or Quora or LinkedIn or Facebook, or out there scamming users with shortened URL services. Corporate users are subjected to annual security training lessons where they are taught never to trust links (email, text message, or otherwise) because the alternative, training users to understand domain names, is just so impractical that we have all given up. Google gave up, with Chrome. They don't even want to show a domain. It's that bad and awful.

To give you an example of how shit it all really is, I recently had to look up an issue related to a Microsoft product. A quick Google search lead me to a number of Stackoverflow and blog posts on the issue. I found a number of links, pointing to Microsoft's bug tracker (links from just a year or two ago). And here's the kicker: all the links 404. The bug tracker is simply not there anymore. This is Microsoft we're talking about. They have given up on links. They have so given up on links that they can't even be bothered to let the user know what happened to those links.

Things are significantly worse on the mobile web. I would estimate that a good 90% of the web doesn't work on mobile in any sort of way that would delight a user. It's more that users just deal with this massive pile of shit. Watch someone use a mobile web browser sometime. Watch the nonsense they have to deal with. The janky-ass scroll popup sluggish ad-infested dumpster fire. All of it.

Reply to the janky-ass web experience on mobile devices:

Hence each website tries to create their own app.

Yep. Plus apps give site owners more control over the users.

-30-