The Simplified Web and the Old Internet Still Exist

But they might be harder to find

created Nov 1, 2019

[This post is still being updated.]

It has been a while since I created a long, rambling, stream-of-consciousness post. This is definitely one of those. If I don't cut much of the crap here, then I'll create a much smaller, more focused post.


A few days ago, on Oct 29, 2019, the internet celebrated its 50th birthday. Early in 2019, the web celebrated its 30th birthday, although the birth of the web has at least three candidate dates. The point is that:

Even though we conflate the terms "internet" and "web," technically, they are dissimilar.

The internet is the network or the highway, and email and the web are applications or automobiles that run on the internet highway.

I wish that I had been maintaining a web page dedicated to storing links to the stories, written over the past few years, that bemoan the loss of the old internet and the old web.

I'll focus on the web. The other applications that I use over the internet include email, FTP, and SSH, and that's about it. Occasionally, I use Gopher (like a text-based web). I rarely use IRC (chat). Some people still use NNTP (Usenet).

Between 1970 and 1990, universities, government orgs, and some corporations used the internet. But the web in the 1990s popularized the internet. Even though email had existed for 20 years and people used electronic mail systems on mainframes and minicomputers at work, I think that few people used email at home prior to the web.

In the early to mid 1990s, when the general public accepted the web, the masses discovered email too. In the mid-1990s, many people bought home personal computers to connect to the internet to surf the web and to email friends and family members. Then some people discovered chat systems. And so on.

Prior to the web (and Gopher, which began around 1991), the internet was used to send and receive documents through email and FTP and maybe other applications. I don't remember what other internet-enabled applications existed prior to 1990 that allowed people to READ documents stored on internal and external servers.

Before the web was invented, network-enabled document reading systems existed, but they were mostly limited to one machine or to machines networked within the same local network. I'll have to re-read the hypertext book published in 1990 that did not mention the web but discussed the hypertext theories and applications created prior to 1990. It's a fascinating read. Many software companies were close to creating what became the web, but their systems were limited in some fashion.

TBL did not create the Hypertext Transfer Protocol (HTTP) out of thin air from a dream. Tim used prior art. He got ideas from the apps and ideas created over the previous 20-plus years.

Tim simplified the user experience. He used the existing internet protocols. He wanted to connect servers located all over the world. He created the web to focus on creating, updating, and reading documents. I think that Tim's first web browser permitted readers to create and update documents through the browser. But as the web grew in the early 1990s, browsers were designed to be read-only.

But the focus was still on documents or web pages for reading. The image tag was added to the HTML spec. This and other formatting features made the web blow past Gopher in popularity. Gopher focused on plain text with linking. At one point, more Gopher sites existed than websites. But the masses liked the increasingly fancy features of the web, and this was only the early 1990s.

Around 1993, the Common Gateway Interface (CGI) was added to the web, which permitted web users to complete forms and have that information sent to the server for processing by server-side programs that were external to the web server. This enabled the web to be dynamic. An early dynamic web app was shopping.

Even for users who simply wanted to read documents, the documents could be generated dynamically on the fly via CGI programs. The CGI programs could pull information from databases and package it up into a customized HTML page that got sent to users.
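To make that concrete, here is a minimal sketch of the kind of CGI program that made this possible. It is a hypothetical example, not code from any of my apps: the script name and the form field are made up, and it uses Perl's standard CGI module.

```perl
#!/usr/bin/perl
# hello.cgi -- hypothetical minimal CGI sketch.
# The web server runs this program and sends whatever it prints back to the browser.
use strict;
use warnings;
use CGI;

my $q    = CGI->new;
my $name = $q->param('name') // 'stranger';   # form field from a GET or POST request

print $q->header('text/html');                # Content-Type header plus blank line
print "<html><body>\n";
print '<p>Hello, ', $q->escapeHTML($name), ". This page was generated on the fly.</p>\n";
print "</body></html>\n";
```

An HTML form that submits to hello.cgi, or a plain link like hello.cgi?name=Jane, would drive it. A real CGI program would do the same thing but pull information from a database before printing the HTML.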

I created my first CGI program in November 1996, using the NCSA web server that I installed on an unused HP-UX machine at work. I love CGI and related technologies that have permitted me to create server-side web-based programs in many languages, including C, Perl, TCL, ASP, JSP, NodeJS, Lua, and maybe more.

Web browsers enabled desktop computers and laptops to become fancy dumb terminals. Users sent information to the server by clicking links and completing forms. The server-side programs processed the information and returned results.

At work from the mid-1990s to the early aughts, I created many internal web apps that used CGI or something similar. I did not have to create desktop applications using Microsoft technologies. I could create server-side programs in C, Perl, TCL, etc. that ran on Linux, Windows NT, and various Unix systems. I even installed the Lynx web browser on our VAX/VMS system.

As long as a web browser existed on the client, I could access the same web application on a VT320 terminal, Apple computer, Windows PC, Solaris desktop computer, etc., and I did do that.

But this web-based client-server setup has grown far more complex. Modern web browsers are built on massive codebases. A few years ago, Firefox supposedly contained over 40 million lines of code.

Do I need that much baggage to read this web page? Of course not. Lynx can read this web page. So can NetSurf, elinks, and links2 -g. I'm guessing that these simpler web browsers do not contain 40 million lines of code.

But the Chrome, Firefox, and Safari web browsers are required to do much more today. Today's web is a long way from the document-only mindset of the early 1990s.

JavaScript was invented in 1995, which made client-side web usage even more dynamic. When combined with CGI or other server-side tech, sophisticated applications can be created. And that's great for internal usage within companies. It's great for apps that run in the "cloud."

Unfortunately, too many websites today that should be document-based are built with technologies that are too complex, which creates slow, bloated, clunky web experiences even when accessing the websites over fast internet connections.

The media, especially newspaper websites, should be lightweight, document-based websites. Static websites (except for a means to require readers to have a subscription to access the content, but that's another argument).

Media orgs create some of the most obnoxious web designs on the planet. Horrible websites. 500-word editorials require readers to download between 2 and 8 megabytes of crapware. It's shockingly bad, and unfortunately, this bloated web design trend is increasing.

Some websites are so awful to use that I give up. It makes no sense for JavaScript to be required for sites that are meant to be read. I'm not logging into NFL.com to complete my taxes, manage a project, create content, etc. I only want to read some text and data. But that site is designed like an application, which makes the experience wretched.

Modern web design equals horribly designed websites. I'm talking about websites and not web apps. In web apps, I log into the app or site and do work or perform tasks. Then I leave. Each of these web apps is a tool, like a claw hammer.

If I want to pull a small nail from the wall, I would use a claw hammer. Modern web design would use dynamite.

If I want to warm a slice of bread, I would use a bare-bones toaster that does nothing but toast bread. Modern web design would use a flame-thrower.

If I want to plant lettuce plants in my raised bed garden, I would use my bare hands or a trowel. Modern web design would use the Gem of Egypt (a huge "power shovel" meant for strip mining).

Modern web design equals overkill. I use Fastmail. It uses modern web design well. Really well, in my opinion, for both large and small devices.

Some other applications are comfortable to use. The JavaScript is well done. The UI/UX designs stay in the background, allowing me to complete tasks easily.

Unfortunately, too many websites have been designed as web apps, probably for some kind of coolness factor and not for any utility, and these atrocious user experiences interfere with my intentions.

The Toledo Blade provides a website that makes it hard to be a subscriber. I don't use their website, nor any of their other digital offerings, and I definitely don't pay for the print product. I pay for a digital subscription, but I created my own web app to read the Blade. The Blade articles that display within my web app are lightweight and comfortable to read. That's utility. I explain this more in my post here.

The Blade's web offering for subscribers should be ad-free and function like https://text.npr.org. Newspaper websites should focus more on being useful, like Craigslist. But it seems that newspaper websites are designed intentionally to be hostile to readers because the sites are trying to win some kind of art award. Design without utility is art.

I shared more thoughts about the simpler web in this 2017-2018 post.

Sadly, more and more of the websites that have been redesigned in recent years have dropped progressive enhancement. The sites break without JavaScript. And these are NOT web applications. They are websites meant to be read.

But most of this decay is occurring with the corporate web. The web is much larger than that. A lot of people still maintain their own personal websites, but the open web and related concepts, such as the IndieWeb, get drowned out by Facebook, Twitter, Instagram, and Reddit, which are the silos.

Personal web publishing still exists. Call it blogging or something else, it doesn't matter. Regularly updated individual websites are still being maintained. They might be harder to find because so much crap exists in Google search results, and most internet-connected users spend a lot of their time on Facebook.

When individual names appear in web articles, those names are usually linked to their Twitter profiles, instead of their personal websites.

I don't want people to stop using their favorite social media silo services. I wish that more people would ALSO maintain personal websites that used their own unique domain names. But this comes at a price, literally in money and figuratively through the admin tax. My wife does not want to be a programmer, designer, database administrator, sysadmin, etc. She wants to create, update, and read content.

WordPress reduces the admin tax by enabling authors to create their web hosting accounts, buy their domain names, employ domain name mapping, and choose display themes relatively easily. Creating and updating content is also a decent experience with WordPress. The monetary costs are affordable for most, but even a small cost may be viewed as too much when compared to the "free" silos.

For my wife's birthday in 2017, I bought her a domain name, and I established an account with Svbtle, which costs $6 a month. I mapped her domain name to her Svbtle account. I seeded her personal website with a few posts that I had created, but I worded the posts on my wife's website to read like she created the content. I wanted those posts to be demos and to encourage her to use the site. Svbtle accounts offer RSS feeds. Svbtle provides almost no user customization opportunities. Svbtle fanatically focuses on WRITING, which is great for people new to personal publishing.

I have created several CMS apps this decade. Some were multi-user and others were single-user web publishing apps. I could have set something up for my wife that was based upon one of my CMS apps, running on a Digital Ocean Droplet, but I wanted to test Svbtle for myself, and if my wife enjoyed personal publishing and wanted more features, then I would migrate her to one of my systems.

But alas, she has not used my nerdy gift.

Those silo services know how to mess with people's psychologies to convince people that they NEED to use social media. The silos know how to make their services addictive.

I have to be in the minority of a minority, since I don't use Facebook, Twitter, nor Instagram. I deleted my Facebook account in the summer of 2016. I deleted my test Twitter accounts this fall. My Instagram account still exists, but I have not used it since 2015. I used it for a year, across 2014-2015. I need to download the images and videos that I uploaded, assuming that's possible, and then delete my Instagram account. I have used Flickr since 2004 or 2005 to store images that I embed into my web posts that use systems that I created. But I also occasionally use my own image-hosting web app that I built and run on a Droplet.

It ain't easy to separate from every internet biz. This content is stored on a Digital Ocean server. It's not stored on a computer in my home, except for backup purposes. If I stop paying whoever hosts my domain name, and/or if I stop paying Digital Ocean for server hosting services, then sawv.org vanishes.

At least on a personal website, especially one where I have access to the server and what's installed, I have near 100 percent flexibility to create, manage, and display content however I desire. I could make every web page on my site look different. I love the options available with my way of hosting content. The flexibility and creativity are worth the admin tax.

I cannot accept giving my content to a silo that packages my activity to sell targeted ads. I dislike the fact that most if not all of the social media silos offer no customization. Everything looks the same.

Content, however, is what matters. Content is the interface. Content is the design. Maybe silo users have no interest in customized looks. They focus on the content.

My post:

I'm mixed about it. I like the unique looks of individual websites, provided the displays do not interfere with my reading experience. If the display of someone's website makes it hard for me to read, then I try to employ client-side tactics in my browser to make the personal website easier to read.

But I cannot help admiring the creative displays of many individual websites. They are fun. But it's the content that makes me return to their sites.

I believe that readers should be in control of how websites appear in their browsers. Typography should be determined by readers.

I prefer dark text on a light background and a large-ish font size, and that's how I have set up sawv.org. But a reader might prefer the opposite, such as light text on a dark background and a small font size.

So-called modern web browsers are huge in terms of application size and features, yet the web browsers offer little to no means to control how websites display to readers. Users need to install extensions. Firefox and Safari offer reader modes, but sometimes websites are designed in a way that makes reader mode unavailable, or some of the content is lost when the page is switched to reader mode. That's probably due to a document-type website being overly designed as a needless application.

Document websites, such as newspapers, should start with https://text.npr.org or http://motherfuckingwebsite.com and then progressively enhance upward, slightly, as long as it's useful.

Personal websites whose authors want to demonstrate their creative sides will not be so boring, but I'm not paying for content at personal websites, like I am with the Blade.

I don't want to see useless, whiz-bang UI/UX features at the Blade. I don't mind funky creativity displayed on personal websites, but I have encountered some personal websites that were cool-looking, but the content was hard to access and read, and with JavaScript disabled, no content displayed. With those websites, I move on, never to return. That's okay. It's a big web.

A few of my related posts:

In late October 2019, I found this Oct 7, 2019 post, titled Web of Documents. Excerpts:

In 1960, Ted Nelson envisioned a web of documents.

It was called Xanadu. It was a grand, holistic vision: of documents that, once published, are available basically forever; of bidirectional links that could glue together not just documents, but parts thereof; of managing copyright and royalties. It was complex. And it never really came to fruition.

But thirty-one years later, another web of documents took off. A much more modest undertaking than Xanadu, with a simple markup language, simple protocol to retrieve the documents, unidirectional, ever-rotting links, and not much else. The World Wide Web. It was prototyped by one man in a few months. And then its popularity exploded.

As the WWW spread [in the 1990s], it grew features. Soon, it was not enough for the documents to contain just text: support for images was added. People wanted to customize the look of the documents, so HTML gained presentational markup abilities, eventually obsoleted by CSS. It was not enough to be able to view the menu of your local pizza store – people wanted to actually order a pizza [CGI 1993]: the need for sessions yielded cookies and non-idempotent HTTP methods. And people wanted the pages to be interactive, so they became scriptable [JavaScript 1995].

As much as I have enjoyed server-side programming, I have spent most of my web time reading. Many of those web pages that I read were dynamically generated by CGI programs, such as my message board toledotalk.com, which I ran for over 16 years, from January 2003 to March 2019. Every thread was dynamically generated.

But I'm sure that I have read a lot of web pages that were static HTML files. When I began blogging in 2001, I managed my content by using a tool called Greymatter, created by Noah Grey. Greymatter was a Perl program that ran on a web server. Greymatter was a web-based static site generator.

With Greymatter, I could use a web browser to create and update web pages that were stored as static HTML files. It was a beautifully simple web pub app.

In 2016, after creating several web pub apps that used dynamically generated pages along with page caching via Memcached, I came full circle, and I created Wren, which I use at sawv.org. I did not know at the time that a small, relatively new programming language called Wren existed; it looks interesting.

My Wren app is a web-based static site generator that contains fewer features than Greymatter, but Wren does only what I need. I even created Wren in Perl. I also created a similar web pub app called Sora, which I built with Lua.

The fewer the features, the less there is to learn and the easier it is to use, and with how I have created Wren, I have more flexibility.

With Wren and Sora, I have to create archive pages, tag pages, and even the homepages manually, if I desire. Obviously a homepage is nice, but most "blog" features are unnecessary to me. I enjoy having this flexibility and power.

The power comes from Wren and Sora permitting me to include custom CSS within the content page, which gets parsed and included in the final HTML page. I can create custom templates, and within the content that I'm editing, I can choose among the custom templates, including a template that contains no CSS.
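Wren and Sora are my own private apps, so the following is not their code, only a bare-bones sketch of the general idea under some assumptions: read a Markdown file, convert it with a Markdown library (the CommonMark CPAN module here), wrap the result in a template, and write a static HTML file. The file names and the trivial template are made up.

```perl
#!/usr/bin/perl
# build-page.pl -- hypothetical sketch of a tiny static-page build, in the spirit of
# web-based static site generators like Greymatter, Wren, and Sora.
use strict;
use warnings;
use CommonMark;   # CPAN bindings to the CommonMark reference library

my ($infile, $outfile) = ('post.md', 'post.html');   # made-up file names

open my $in, '<', $infile or die "cannot read $infile: $!";
my $markdown = do { local $/; <$in> };                # slurp the whole file
close $in;

my $body = CommonMark->markdown_to_html($markdown);

# A trivial template. A real setup could offer several templates, including one
# that embeds custom CSS pulled from the content page itself.
my $html = <<"HTML";
<!DOCTYPE html>
<html><head><meta charset="utf-8"><title>post</title></head>
<body>$body</body></html>
HTML

open my $out, '>', $outfile or die "cannot write $outfile: $!";
print $out $html;
close $out;
```

With a web-based generator like Wren, the same kind of work happens on the server after I submit the content form, and the output is a static HTML file that the web server can serve with no further processing.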

Back in 2016 when I was testing my Wren app and manually creating pages that were automatically created by my other web pub apps, I found Wren refreshing to use because of its freedom.

I rely on server-side programming concepts, such as FastCGI, to create static documents. But I could create the documents on a local machine and use git or FTP or something to transfer the files to a web server that my domain name points to. We don't need web servers to support external server-side programs for static document websites.
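For that non-CGI route, the upload step could be as simple as a small script run on the local machine. A hedged sketch, assuming plain FTP with Perl's Net::FTP module; the host name, login, and paths are all placeholders:

```perl
#!/usr/bin/perl
# upload-page.pl -- hypothetical sketch: push a locally built HTML file to a web host
# over FTP. Host, credentials, and paths are placeholders, not real values.
use strict;
use warnings;
use Net::FTP;

my $ftp = Net::FTP->new('ftp.example.com') or die "cannot connect: $@";
$ftp->login('username', 'password')        or die 'login failed: ', $ftp->message;
$ftp->cwd('/public_html')                  or die 'cwd failed: ',   $ftp->message;
$ftp->put('post.html')                     or die 'put failed: ',   $ftp->message;
$ftp->quit;

print "uploaded post.html\n";
```

git, rsync, or scp would work just as well. The point is that a static document site needs nothing from the web server beyond serving files.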

Maybe the web should have remained static all along, similar to Gopher. And if we wanted a dynamic, complicated, bloated web, then a new protocol should have been invented that ran over the internet.

The simplified, static document web could have existed at http/https while the complex, bloated web could have existed at bloat://complex.com

Amazon.com, TurboTax, YouTube, and hostile reading media sites would use bloat:// and a related bloat browser would be needed by users. What's one more app on a desktop or laptop computer and on mobile devices?

The bloat browsers could have 40 million lines of code that support increasingly complex technologies.

The useful web browsers would be similar to NetSurf and support HTML4 and CSS2 if that much. Even older HTML and CSS might be enough for a simple, useful, document-based web that still permits personal publishers to express their creativity.

Maybe the inventions of the Common Gateway Interface and JavaScript ruined the useful, document-based web.

Personally, I like the idea of simple web browsers, such as elinks, Lynx, NetSurf, links2, and other similar browsers, supporting HTML forms that work with server-side programs.

If Gmail can exist in "old school" JavaScript-free, HTML mode, then sophisticated tools, such as tax prep and project management web apps, could do the same.

On larger screens, I use Gmail without JavaScript. Google calls it their "HTML view." I prefer this version because it's significantly faster than Google's bloated, modern web design version, which I find to be a horrible experience.

Email is a claw hammer tool. I'm not looking for art in my email apps. I can view the art that hangs on our walls. This is why Fastmail is my main email app. Fastmail does not work without JavaScript, but Fastmail does a much better job of making JavaScript useful instead of annoying.

Back to excerpting that post about a web of documents.

We don’t have a Web of Documents anymore.

I’ve been using the word “document” intuitively and vaguely so far, so let’s try to pinpoint it. I don’t have a precise definition in mind, but I’ll share some examples. A book is a document, to me. So is a picture, an illustrated text, a scientific paper, a MP3 song, or a video. By contrast, a page that lets you play Tetris isn’t. The essence of this distinction seems to be that documents have well-defined content that does not change between viewings and does not depend on the state of the outside world. A document is stateless. It exists in and of itself; it is its own microcosm. It may be experienced interactively, but only insofar as it enables the experiencer to focus their attention on the part of their own choosing; the potential state of that interaction is external to the document, not part of itself.

These days, the WWW is mostly a Web of Applications. An application is a broader concept: it can display text or images, but also lets you interact not just with itself, but with the world at large. And that’s all well and good, as long as you consciously intend these interactions to happen.

Can a web app exist for a user who uses the text-based Lynx web browser? I say yes, since the web has supported CGI since around 1993. CGI-based web apps that use only HTML and maybe CSS on the client side are obviously lighter in weight than client-side JavaScript single page applications or something similar that relies on megabytes of minified JavaScript.

And web apps that are primarily client-side JavaScript-based may encounter issues with cross-browser compatibility, while basic HTML forms and links and little to no CSS should have no trouble working across all web browsers.

The processing has been moved from server-side programs to client-side JavaScript, but client-side JavaScript still needs to make calls to server-side programs that work with data stores.

Has anything been saved or improved? Does a so-called "slick-looking" interface make the web app easier to develop, test, and deploy across multiple types of browsers? More importantly, does the slick interface make it easier for users to use the apps/sites? Do users expect slick interfaces that may or may not improve the users' UI/UX?

Are programmers and designers using big, client-side JavaScript frameworks because they are cool, or are the SPAs solving user design problems?

If users demanded fancy, client-side JavaScript interfaces, why is Craigslist still popular? Craigslist is a tool. It's a pitchfork when it's time to sling mulch. It's useful when needed.

Gmail's modern web look and the Blade's modern web look are garbage.

More from the web documents post.

A document is safe. A book is safe: it will not explode in your hands, it will not magically alter its contents tomorrow, and if it happens to be illegal to possess, it will not call the authorities to denounce you. You can implicitly trust a document by virtue of it being one. An application, not so much.

... the more I think about it, the more sense it makes to me to attack the problem closer to its root: to decomplect the notions of a document and an application; to keep the Web of Applications as it is, and to recreate a Web of Documents—either parallel to it, or as its sub-web.

The Web of Applications protocol (bloat://) should have been created at least 20 years ago. Except for bug and security fixes, the standards for HTTP, CGI, HTML, and CSS should have been frozen to what existed in 1997 to 1999. JavaScript should have been removed from web browsers in the late 1990s because the bloat:// protocol was being created. I'll need a time machine.

Java Applets and Flash were early warning signs of the forthcoming complex, bloated web where people were thinking that the web could be much more than documents and simple form processing.

I don't remember a lot of fancy client-side JavaScript sites and apps in the mid to late 1990s, but they probably existed.

At work around 1997, I created a web-based data entry program for demonstration purposes, using client-side JavaScript that sent user input to the server. The web server was called LiveWire, an expensive product created by Netscape. We ran it on a Sun Solaris server.

In the late 1990s, LiveWire supported server-side JavaScript. In 1997, I thought that it was cool to use one programming language, JavaScript, to create client-side and server-side programs. I also did something similar with Java by creating the data entry tool as a Java applet and using Java Server Pages on the server.

But from my foggy memory, the web apps that I used over the public internet were composed mainly of HTML forms and server-side (CGI) programs. At work, that's how I created my web apps. I did not use client-side JavaScript between 1991 and 2002. I used HTML forms that made POSTs to CGI programs, and I used links that made GETs to CGI programs. I did view-sources on Yahoo's web pages because I liked how Yahoo displayed tabular info. I don't think that I used CSS in the mid to late 1990s. HTML and CGI. And I still do that 20-plus years later, except with a smidgen of CSS included. I use FastCGI today, but that's close enough to the past.

I'm writing this page, however, using my JavaScript "editor" called Tanager. Since 2001, most of my web-based creates and updates have been done with HTML textarea boxes. In recent years when I'm engaged in longer editing sessions, I use Tanager, which is a conduit or gateway. Tanager sends the markup to the server where server-side code creates the HTML. By default, Tanager sends markup to the server every five minutes. Tanager does not do much, only enough, and that makes Tanager useful to me.
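Tanager's client-side code isn't shown here, but as a hedged sketch of the server-side half of that conduit, the hypothetical CGI endpoint below accepts posted markup and saves it. The script name, field name, and file path are all made up, and a real version would also rebuild the HTML page.

```perl
#!/usr/bin/perl
# save.cgi -- hypothetical sketch of the kind of endpoint an editor like Tanager
# could post markup to every few minutes. Not the actual Tanager or Wren code.
use strict;
use warnings;
use CGI;

my $q      = CGI->new;
my $markup = $q->param('markup') // '';       # raw markup from the editor's POST

open my $fh, '>', '/tmp/draft.md' or die "cannot save: $!";   # made-up path
print $fh $markup;
close $fh;

# A real app would convert the markup to HTML here and rewrite the static page.
print $q->header('text/plain');
print 'saved ', length($markup), " bytes\n";
```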

I started using Textile markup in 2005 when I created a wiki app that eventually became the new code that powered my toledotalk.com message board. In 2013 when I created my Junco CMS, I added Markdown and MultiMarkdown support. I started using GitHub in 2013 too. Since around 2016, I have created all new content with Markdown. A couple months ago, I modified my Sora web pub app to use the CommonMark library. Sora does not support Textile nor MultiMarkdown. I require only a small amount of formatting options.

With all of the CMS apps that I have created, I included my own custom formatting commands, but I did not do that with Sora, and I might have only a couple minor custom commands in Wren.

As time marches on, each new CMS that I create contains fewer features. That's because I learned that I don't need much to create pages for the Web of Documents.

But I do like having the ability to create and update through a web browser.

More from the web documents post.

To do this, we need to take a step back. (Or do a clean start and invent a whole new technology, but this is unlikely to succeed). Fortunately, we don’t have to travel all the way back to 1992, when the WWW was still a Web of Documents. (I still remember table-based layouts and spacer gifs, and the very memory makes me shudder). I think we can base the new Web of Documents on ol’ trusty HTTP (or, better, HTTPS), HTML and CSS as we know them today, with just three restraints:

But do we need the advanced features found in HTML5 and CSS3? Some of the semantic tags in HTML5 could be useful, but do we really need them for a Web of Documents? We could follow the IndieWeb community and make good use of Microformats2.

More from the author:

No methods other than GET (and perhaps HEAD). POST, PUT, DELETE and friends just have no place in a world of documents.

Mmm. I assume that means no Common Gateway Interface support.

They are not idempotent; they potentially modify the state of the world, which documents should not be able to do. (I was also thinking “no forms”, but with #1 in place, it seems like an unnecessary refinement. After all, forms that translate to GET requests just facilitate creating URLs: a user could just as well have typed the resulting URL by hand.)

I don't understand that last sentence.

No scripts of any kind. Not JavaScript, not WebAssembly. Not even to enrich a document, such as syntax-highlight the code snippets. This one may seem too stringent, but I think it’s better to err on the safe side, and it’s very easy to enforce.

That's client-side programming. I agree with that. The author of that post never mentioned CGI and related tech directly.

No cookies. Cookies by themselves aren’t interactive, but having them makes it all too easy to abuse the semantics of HTTP to recreate sessions, and on top of them reinvent the app-wheel and eventually forfeit the Web of Documents again.

That would eliminate CGI-based web apps. I use cookies to maintain login sessions in my various CMS programs. Session IDs could be attached to GET requests, but that would require dynamically generated pages, which breaks the author's idea of a Web of Documents.

I'm going to assume no HTML forms for any reason, even search, and no server-side programming capability whether it be CGI or something else.

Again, there may be corner cases that may have escaped me. But if a WWW page conforms to these restrictions, I think it may be pretty safe to call it a “document” and make it a part of the Web of Documents.

How do we achieve this? I don’t know, really. I don’t have a concrete proposal. Perhaps we could have dedicated browsers for the WoD; ...

My summer 2019 post:

Related test website:

Since I don't have a Markdown-only web browser, I rely on Chrome and Firefox extensions. I use "Markdown Preview Plus" for Chrome and "Markdown Viewer Webext" for Firefox. Both extensions offer default themes to display the HTML, and both permit me to upload my own custom CSS. Readers should be able to control the typography.

With the Chrome extension, I can select .md, .txt, and .text files to be converted, if I desire, provided the files contain Markdown. Markdown content accessed at md.soupmode.com, sawv.org, and daringfireball.net would all display the same, based upon my custom CSS that I uploaded to the extension.

I enjoy viewing the unique, fun, and quirky looks of personal websites, but I visit websites, mainly for the content. That means the content is the interface and most of the design is governed by typography. Web publishers won't appease all readers, regarding typography. That's why readers should have full control over how content is displayed on their devices.

For my md.soupmode.com test site, only the homepage is an HTML page, and it's automatically created by my Sora web-based static site generator that I modified for that test site NOT to create HTML for the article pages. The HTML home page is an info page to explain what's happening. For articles, I write in Markdown, and Sora saves the Markdown. That's it.

The above concept uses no HTML and no CSS on the server. I created a small CSS file that I uploaded to the Markdown viewer web browser extensions.

http://md.soupmode.com/my-css-used-in-markdown-viewer-browser-extensions.md

My md.soupmode.com test site takes the Web of Documents idea to an even greater extreme, since I eliminated the usage of HTML on the server. The Markdown browser or the Markdown viewer web browser extensions apply user-chosen style rules. Readers control the typography.

Obviously, non-tech users would not create CSS. In a real Markdown browser, the browser would offer forms that let all users choose typographical options via checkboxes, radio buttons, select-dropdowns, and input fields. It would be nice to have a default view that applies to all websites, plus site-specific views. sawv.org could look one way, while every other website shared the same default look.

... perhaps we could make existing browsers prominently advertise to the user whether they are browsing a document or an application. On top of all the technical decisions to make, there’ll be significant campaigning and lobbying needed if the idea is ever to take off.

I like the spitballing but it's unlikely that much will occur outside of a few hobbyists.

Without CGI and forms, I would have to create my Markdown files on a local computer with something like iA Writer that can save to Dropbox. Anyway, some kind of flow would need to be created that would allow an editor on a local computer to save to a cloud file hosting service, and then that file would need to be copied to an S3 server or something similar to be displayed over the web.

What about search? I use my search function often at sawv.org. It's only available to me. If the files were stored on the same local computer, then I could search my computer.

The backups could be stored at Dropbox, GitHub, or something similar. Then the website could exist at S3 or elsewhere. Local backups could be made too to physical storage.

My intent in this article is to provide food for thought. All I ask from you, my reader, is consideration and attention. And if you got this far, chances are I got them. I’m grateful.

This page is a document. Thank you for reading it.

Good post.

More of my thoughts dumped into this April-May 2019 post.

Another Markdown-only test site of mine:

scaup.org is hosted in an S3 bucket. I think the simplest thing to do is to edit the Markdown file locally and then use either the command-line s3cmd tool to copy the file to the S3 bucket, or the AWS web-based interface to upload the Markdown file. I can create "folders" through the AWS web interface too.

Anyway, there's no need for GitHub or Dropbox. And if using the AWS web interface, there's no need for command-line utilities. Then locally, for a backup, I can save the files to a thumb drive, or if I do use Dropbox, the files would be backed up to Dropbox, which also costs a fee, I think.
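To make that flow concrete, here is a hedged sketch of a one-step publish script that wraps s3cmd. It assumes s3cmd is installed and configured; the bucket name matches scaup.org, but the file argument and the choice of --acl-public are mine.

```perl
#!/usr/bin/perl
# publish.pl -- hypothetical sketch: copy a local Markdown file to the scaup.org
# S3 bucket with the s3cmd command-line tool.
use strict;
use warnings;

my $file = shift @ARGV or die "usage: publish.pl path/to/post.md\n";

# -P (--acl-public) makes the uploaded object readable over the web.
system('s3cmd', 'put', '-P', $file, "s3://scaup.org/$file") == 0
    or die "s3cmd failed: $?\n";

print "published http://scaup.org/$file\n";
```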

Despite what is shown on the scaup.org/home.md page, these are the only Markdown files that exist in that S3 bucket.

http://scaup.org/my-css-used-in-markdown-viewer-browser-extensions.md

http://scaup.org/2019/06/24/the-plastic-conundrum.md

http://scaup.org/2010/09/24/observing-lakeshore-dragons.md


October 2019:

-30-