marbu's comments

One way to look at this code is as a quick prototype to turn the idea into a real thing to play with. And to appreciate that, one has to realize that the original idea included both reading and easy editing of web pages in the same client, in WYSIWYG fashion.

See https://www.w3.org/People/Berners-Lee/WorldWideWeb.html for context:

I wrote the program using a NeXT computer. This had the advantage that there were some great tools available -it was a great computing environment in general. In fact, I could do in a couple of months what would take more like a year on other platforms, because on the NeXT, a lot of it was done for me already. There was an application builder to make all the menus as quickly as you could dream them up. there were all the software parts to make a wysiwyg (what you see is what you get - in other words direct manipulation of text on screen as on the printed - or browsed page) word processor. I just had to add hypertext, (by subclassing the Text object)


This is likely the URL of the XFIND gateway [1,2], which was basically the first web service, making information from the XFIND information system available via the web. It was already in an operational/demo-able state, along with the WorldWideWeb browser/editor (NeXT) and the Line Mode Browser (a dumb command-line client), at an early stage of the web at the end of 1990. This is because gateways like these were crucial for the web to take off in the particle physics community during the first years of the project.

This may seem obvious and boring now, but back then it made a real difference (copy-pasting a section from my old post [3]):

... a physicist from the German particle physics lab DESY who was used to looking up information via XFIND at CERN, but using it from DESY was a bit clumsy. First of all he had to telnet to CERN, then log in to the IBM CERNVM machine, then start XFIND there, and only then could he finally place his query. Moreover, as the connection was slow and unstable, one had to repeat the whole procedure in case of a network failure. Compared to this, using the Line Mode Browser from DESY to directly access the XFIND Gateway at CERN was a big improvement, which helped the web spread to DESY.

[1] https://www.w3.org/Talks/FINDGateway.html
[2] example: http://www.dnp.fmph.uniba.sk/cernlib/asdoc/fatmen_html3/node...
[3] https://blog.marbu.eu/posts/2023-04-29-the-first-web-browser...


> As some cars gain more autonomy, it's probably helpful if they have a way to signal their intentions to other road users.

My first thought was exactly that: this looks useful. That said, thinking about it a bit more, it's not easy for me to come up with good examples of situations where this signal is actionable, clear and useful. I doubt most people understand what level 3 autonomous driving means, or what to expect from a car driving in this mode in a particular situation, especially when there will be differences across car manufacturers.


I would say that the web became popular way before significant malicious actors adapted to it. The web started as an open system available for practical use cases, and grew exponentially from there.


I just looked this up again in the book "How the Web was Born":

https://books.google.com/books/about/How_the_Web_was_Born.ht...

And even though multiple people were requested, CERN management approved just 2 full-time people to work on the project (Tim Berners-Lee and Robert Cailliau), and on top of that counted on the work of Nicola Pellow (a summer intern) and allowed Bernd Pollermann to spend some (limited) time working on the FIND gateway. So nobody actually worked as a Hyper-Librarian or as an X Windows and human interface engineer during the early stages of the web at CERN, which is why it would be hard to assign names to "RJ" or "KG".


> It envisioned some kind of a collaborative web where readers were also publishers, but it didn't go into much detail. AFAIK this phase was never completed, and IMO this is a major reason why the web is so centralized today, why users have no control of their data, and why it's primarily aimed towards consumers.

> Had publishing content been as easy as consuming it was from the start, there would've been more tooling built around this concept ..

I have read the book "How the Web was Born" by James Gillies and R. Cailliau, and the reason why the original vision didn't fly is a bit more complicated.

https://books.google.com/books/about/How_the_Web_was_Born.ht...

The thing is that the editing/publishing part was in fact part of the first prototype from the beginning (without any way to edit remote resources, though)! But to make the system usable early for as many users as possible, this feature was not included in the dumb terminal (vt100) browser, nor in any other browsers which followed it.

Btw some time ago here on HN, I ran into a person who was among the early users of the dumb terminal browser (aka Line Mode Browser), but he didn't know much about the first prototype and its editing feature.

And I would say that it would have been very difficult to figure out how to make the original browser/editor vision fly while at the same time trying to make the early web useful to early users, and on top of that driving adoption as fast as possible. CERN invested money into this proposal expecting some outcome, not 10 years of vaporware (like some other hypermedia systems ended up, ehm). Moreover, if Tim and Robert hadn't been able to focus on demonstrating that their proposal was practical (remember that while you find their vision familiar and understandable today, that was not the case back then), it could easily have failed early on, and we could have ended up with proprietary closed system(s) from the start.

Btw I wrote a blog post about this very topic some time ago; if you are interested in more details or would like to see additional references from this period, you can have a look:

https://blog.marbu.eu/posts/2023-04-29-the-first-web-browser...


I reported about 5 such ads just this month, all clear financial scams impersonating well-known people and companies in the Czech Republic (where I live), only to be told that YouTube checked my claim and that the ad in question doesn't break any YouTube policy.

Obviously nothing is forcing Google to deal with this in any way. But I wonder how that could work out for Google in the long run.


The more legitimate reasons there are for adblockers (such as "I don't want to risk falling for scams."), the worse their anti-adblocker efforts look.


The same thing happened to me. Ads from Kazakhstan impersonating a Czech state-owned energy company etc. And almost every day there is an article in the news about how older people have been caught out and lost their savings.


In my case, it was either from Kazakhstan or the US.


So Google is then knowingly participating in financial scams? Looks like grounds for a lawsuit.


Google is not alone in promoting such scams and being complicit in crime. The law doesn't apply to big companies though, so they can keep doing it and profiting from it.


DMCA takedowns are proof that the law applies to big companies too. Unfortunately, it seems they only respond to lawsuits.

Victims of these scams should sue Google, Meta and any other big company knowingly participating in this kind of scam.


> Obviously nothing is forcing Google to deal with this in any way

Really? I mean, they're getting paid by a scammer who uses provably fake and deceptive content to prey on its victims; they have been alerted to the situation, they claim they reviewed it, and that they think it's fine. What could go wrong?


Yeah. This is why I doubt it's a good strategy for Google in the long term. Sooner or later, someone will finally be pissed off enough to go after this practice (either a government or another big US company).

That said, there seems to be no legal way for a big Czech company to go after either Google or the scammers, otherwise this would have already been resolved. CEZ (one of the companies being impersonated by the scammers here) issued a press release about this almost 2 years ago (references are in Czech):

https://www.cez.cz/cs/pro-media/tiskove-zpravy/klamave-rekla...
https://www.cez.cz/cs/podvodna-reklama


Thanks. I still don't understand how it is possible that suing YouTube doesn't work after having reported the ads and having been told that they're fine. You basically have a written statement from the company that incriminates them.


Seems like the only policy is that it makes money. If it does then everything's ok.


> 1. <figure> and <figcaption>

Pandoc will generate HTML code using these elements when you use the implicit_figures extension:

https://pandoc.org/MANUAL.html#extension-implicit_figures
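
For illustration, a standalone image paragraph in Markdown like this (a made-up example of mine, not taken from the manual):

  ![Screenshot of the WorldWideWeb browser on NeXT](worldwideweb.png)

gets turned into roughly the following HTML, with the alt text reused as the caption (exact attributes vary a bit between pandoc versions):

  <figure>
    <img src="worldwideweb.png" alt="Screenshot of the WorldWideWeb browser on NeXT" />
    <figcaption>Screenshot of the WorldWideWeb browser on NeXT</figcaption>
  </figure>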

And it seems to be well supported in web browsers:

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/fi...

You can see an example of how it looks in this post from my blog; there are no CSS tweaks for the figure or its caption (I use a static site generator based on pandoc):

https://blog.marbu.eu/posts/2023-04-29-the-first-web-browser...

And personally I find that better than the alternative solution consisting of multiple div elements.


I'm sorry for what I'm going to tell you, but this is exactly the sort of wrong use of the <figure> tag I mentioned. In fact, your blog contains the PERFECT example of WRONG use of the <figure> tag, which illustrates why <figure> semantics are a lost game at this point.

The spec explicitly notes:[1]

  >When a figure is referred to from the main content of the document by identifying it by its caption (e.g., by figure number), it enables such content to be easily moved away from that primary content, e.g., to the side of the page, to dedicated pages, or to an appendix, without affecting the flow of the document.
  >
  >If a figure element is referenced by its relative position, e.g., "in the photograph above" or "as the next figure shows", then moving the figure would disrupt the page's meaning. Authors are encouraged to consider using labels to refer to figures, rather than using such relative references, so that the page can easily be restyled without affecting the page's meaning.

The point of <figure> is that the element can be REMOVED from the document and moved elsewhere without changing the meaning of the document. I assume the intention is that you say "see figure 3" like a textbook. If you write something like:

  >I was familiar with it’s interface from few screenshots like the one shown below

There's no way to move the <figure> containing the screenshot to a sidebar for example, because then what you wrote wouldn't make any sense.
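
For contrast, a spec-compliant version would reference the figure by a label instead of by position; a hypothetical rewrite (not taken from the actual blog post) could look like:

  <p>I was familiar with its interface from a few screenshots
  (see Figure 1).</p>

  <figure id="figure-1">
    <img src="worldwideweb.png" alt="The WorldWideWeb browser/editor on NeXT">
    <figcaption>Figure 1: The WorldWideWeb browser/editor.</figcaption>
  </figure>

With that phrasing, the prose still makes sense even if a stylesheet floats the figure into a sidebar or an appendix.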

If <figure> was being used the way it was meant to be used, it would be trivial to write a browser plugin that hid all figures and listed them in a sidebar. But nobody is following the spec. Everyone on the planet is using <figure> as if it were just a container for an image with a tag for the image caption (pandoc, WordPress, etc. are all doing this!), so that's in practice what it is now.

This is what makes it so hard to understand who would even consume these tags for something useful. It seems that every time they're widely used, they're used with semantics that don't match the spec, so if you wrote a markup-based tool, you would have to go against the spec, which means there is no point in having a common HTML spec in the first place; just make your own API like microdata. I'd even say the only reason that browsers work at all is that authors are forced to see their websites through a browser, so the markup has to at least work in the browser. For every tool that authors don't all use (such as a browser plugin that hides figures), there's no way to guarantee the author used the markup correctly, so it's not something that can be relied on.

[1] https://html.spec.whatwg.org/multipage/grouping-content.html...


Oh, you are indeed right about the figure element! Sorry for missing your point at first.

I would not blame this on pandoc though; it's my mistake of missing what the intended purpose of the figure element is, because I haven't studied the spec and browsers don't do anything useful with it (as you pointed out). That said, if I had used pandoc to generate a PDF via LaTeX, I would have noticed this, since in that case the figures are repositioned as expected.

And while I agree that in the current state it's kind of pointless for browsers to try to take advantage of this element when most real-world markup goes against the spec, I believe it didn't help that browsers didn't do anything with it in the beginning, when nobody was using it yet. But since the spec is not explicitly asking for anything, browsers did the bare minimum.

And I can already imagine 2 use cases:

i) better layout of the printed page (e.g. when I try to print my blog post, Firefox will happily cut a figure in half if I select printing on A5 paper, even though it could try to reorganize it a bit smarter ...; see the CSS sketch after this list)

ii) similar to what you describe, an ability to show the figures in a separate window so that you can see the text and figures at the same time (this is actually similar to the picture-in-picture-like feature for images that I describe in the blog, and to be honest I would still find that kind of useful)
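
For i), as an aside: an author can already hint this with a small print stylesheet, something like the following sketch (an assumption of mine about how it could look, not something my blog's stylesheet actually does):

  /* ask the browser not to split a figure across pages when printing */
  @media print {
    figure {
      break-inside: avoid;       /* current property name */
      page-break-inside: avoid;  /* legacy alias for older engines */
    }
  }

But that only papers over the problem; it would still be nicer if browsers handled figures sensibly by default.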


> Maybe he needs to come up with a better package manager and make a distro.

My guess is that all you need are docs more friendly to newcomers. See for example how one would do something like that with Fedora:

https://blog.aloni.org/posts/how-to-easily-patch-fedora-pack...


As long as you are willing to study how your distribution does packaging, adding a build flag to an already packaged tool is actually easier:

You could ask your package manager (or a distro build tool) to point you to the source code and scripts which were used to compile the package, install the build dependencies, tweak the build, and rebuild. The hard work starts when you need to maintain your tweak on top of the existing package in the long run.
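
To make that concrete, on Fedora the flow is roughly the following (a hedged sketch; htop is just an arbitrary example package, and the exact paths depend on your rpmbuild setup):

  # grab the source package and unpack it into ~/rpmbuild
  dnf download --source htop
  rpm -ivh htop-*.src.rpm

  # install everything needed to build it
  sudo dnf builddep ~/rpmbuild/SPECS/htop.spec

  # edit the spec file (add a patch, tweak %configure flags), then rebuild and install
  rpmbuild -ba ~/rpmbuild/SPECS/htop.spec
  sudo dnf install ~/rpmbuild/RPMS/x86_64/htop-*.rpm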

I would rather say that as long as you are using a build done by a 3rd party (such as your GNU/Linux distro, rather than by the author of the original project), rebuilding from source should be possible, since both the original project and the builder need to follow certain protocols and expectations for that to work (using a common way to share code, using standardized build systems in a clean way ...). What hurts the open source approach is when a project doesn't follow the usual conventions because it doesn't expect people to rebuild it, and provides static binary builds as the main way to use it instead.


> As long as you are willing to study how your distribution does packaging,

That's how I read the article: as saying "figuring out how to make a custom package is too difficult" compared to grab source, edit & compile.

Author does have a point there.

That said, e.g. with Gentoo, iirc you'd have 1 huge download directory with vanilla source archives, and a per-package directory with a couple of Gentoo-specific patches, perhaps with some config file to determine which of those are applied.

If making custom mods to a package is a recurring task, then it shouldn't be too hard to go through the docs & figure out how to add another patch.
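
And iirc portage even applies user patches automatically if you drop them into a per-package directory, roughly like this (paths from memory, category/package name is just a placeholder):

  # picked up by eapply_user during the prepare phase
  /etc/portage/patches/<category>/<package-name>/my-tweak.patch

  # then rebuild just that one package
  emerge --oneshot <category>/<package-name>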


> The hard work starts when you need to maintain your tweak on top of existing package in a long run.

Which is why you have a good motivation to upstream your patch at that point.

