
In addition, here is the Bright White Lightning discography:

https://dataairlines.bandcamp.com/album/bad-teeth-data024

https://dataairlines.bandcamp.com/album/dirty-nails-data038

I have not seen anything from them since 2014, so I am very happy to see another track.


> I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."

The scene you mentioned (an amazing movie that holds up to this day) with the Major and Togusa:

https://youtube.com/watch?v=VQUBYaAgyKI

While I frequently use a similar argument, "We need someone 'untainted' to provide a different point of view", my honest opinion is somewhat more nuanced. These models tend to gravitate towards a certain level of writing competence, determined by how good we are at filtering pre-training data and creating supervised data for fine-tuning. However, that level is still far below where my current professional writing is and I find it dreadful to read compared to good writing. Plenty of my students can not "see" this, as they are still below the level of current LLMs, and I caution them against relying too heavily on LLMs for writing, as they may then never learn good writing and "reach above" LLM-level writing. Instead, they must read widely and reflect. I also always provide written feedback on their writing (rather than making edits myself) so that they must incorporate it manually; in doing so, they have to consider why I disagree with their current writing and hopefully learn to become better writers.


Bitterpilled. Wow, the audio mixing on that clip is great. I miss art like this. I'm afraid that nothing will recapture the way I felt watching GitS the first time.

There are so many pieces of media that I wish I could fully scrub my memory of to experience for a second time.

You just invented a category for a list! Going to have fun thinking of mine.

Indeed, Heathrow security is the rudest I have experienced. They get aggressive if you so much as ask a question. Furthermore, I have on numerous occasions had them argue with me to go against the medical advice of both doctors and medical device manufacturers. The last time, they went as far as claiming that a scanner does not emit electromagnetic radiation.


I would have loved to see how it holds up under some load via FastCGI and CGI (via slowcgi(8)), since httpd(8) can be used with both.
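
For anyone wanting to test that themselves, a minimal httpd.conf(5) sketch wiring up both would look roughly like this (the server name, application socket, and paths are assumptions; slowcgi(8)'s default socket lives under the /var/www chroot):

  server "example.org" {
    listen on * port 80

    # Classic CGI: slowcgi(8) translates FastCGI to CGI and
    # executes the scripts under /var/www/cgi-bin.
    location "/cgi-bin/*" {
      fastcgi socket "/run/slowcgi.sock"
      root "/"
    }

    # A long-running FastCGI application on its own socket.
    location "/app/*" {
      fastcgi socket "/run/app.sock"
    }

    root "/htdocs"
  }

Remember to enable and start slowcgi(8) with rcctl(8) before pointing a load generator at it.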


> Although I guess the argument will be that email clients should use AI to summarise the HTML into a plain text summary.

Or you could pass it through ~5,000 lines of C [1] and have it done in milliseconds, even on hardware old enough to drink.

[1]: https://codemadness.org/webdump.html
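
The pipeline is then just a matter of feeding it HTML on standard input (the URL is a placeholder; webdump reads HTML from stdin and writes plain text to stdout):

  # Fetch a page and render it as plain text.
  curl -s 'https://example.org/article.html' | webdump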


I think our contexts are all different. But, to share a different experience: as an academic (with plenty of conversations involving people in industry each year), I have used interleaved and bottom-posting for decades and it causes confusion maybe once a year at most, and mostly because Microsoft's online client is broken and at times does not even render anything below "Dear Foo," in the HTML view (got to give this small start-up in Redmond some more time though; we can not expect them to implement standards that have only been around for over 40 years).


Exactly. The problem is that by their very nature some content has to be dynamically generated.

Just to add further emphasis as to how absurd the current situation is: I host my own repositories with gotd(8) and gotwebd(8) to share within a small circle of people. There is no link on the Internet to the HTTP site served by gotwebd(8), so the crawlers fished the subdomain out of the main TLS certificate. For the last six or so months I have been getting hit once every few seconds by crawlers that ignore robots.txt (of course) and wander aimlessly around "high-value" pages like my OpenBSD repository forks, calling blame, diff, etc.

I am still managing just fine to serve things to real people, despite at times having two to three cores running at full load to serve pointless requests. Maybe I will bother to address this eventually, as it is melting the ice caps and wearing my disks out, but for now I hope they will choke on the data at some point and that it will make their models worse.
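
If I ever do, a pf.conf(5) rate limit is probably where I would start; a minimal sketch (the table name and thresholds are arbitrary):

  # Dump sources that open connections too quickly into a
  # table and block them outright.
  table <crawlers> persist
  block in quick on egress from <crawlers>
  pass in on egress proto tcp to port { 80 443 } \
    keep state (max-src-conn-rate 20/10, \
    overload <crawlers> flush global)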


The uniqueness of the situation is that OpenAI et al. pose as intelligent entities that serve information to you as an authority.

If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.



Correct, the pirated music library was before they exited the closed Alpha.


No, that's what they ran on when the general public could join on a referral basis. They called that "beta".

The technology was already proven, i.e. The Pirate Bay and other torrent networks had already been a success for years. What Spotify likely aimed to show was that they could grow very fast and become too big to just shut down, the way the entertainment industry had tried to do with TPB.

After they took the entertainment oligarchs onboard, they cut out the warez and substituted licensed material.


Not sure if it was called "beta" or "alpha", and "closed" is of course up to interpretation, but it was indeed by invitation. Swedish law at the time (still?) had a clause permitting the sharing of copyrighted material within a limited circle, which I know Spotify engineers referred to as somewhat legitimising it. I also know for a fact that once the invite-only stage ended there was a major purge of content and I lost about half of my playlist content, which was the end of me having music "in the cloud". Still, this is nearly twenty years ago, so my memory could be foggy.


When I first started using Spotify, a lot of the tracks in my playlists had titles like "Pearl Jam - Even Flow_128_mp3_encoded_by_SHiLlaZZ".

Always made me chuckle; it looked like they had copied half of their catalogue from The Pirate Bay. It took them a few years to clean that up.


Yes, when the entertainment industry came onboard they immediately made the service much worse. I reacted the same way you did.

IIRC it was 2008, so a little less than twenty years ago.


> The technology was already proven, i.e. The Pirate Bay and other torrent networks had already been a success for years.

Spotify showed that you could have a local-like experience with something backed by the cloud. BitTorrent had never really done that. The client wasn't that good, and you couldn't double click and hear a song in two seconds.

The way you said that made me think you might be remembering when it was partially P2P. I don't remember the timeline, but it was only used to save bandwidth costs, and they eventually dropped it because network operators didn't like it and CDNs became a thing.


If you don't remember, why speculate?

Ek had been the CEO of µTorrent and they hired a person who had done research on torrent technology at the KTH Royal Institute of Technology to help with the implementation. It was a proven technology that required relatively small adaptations.

They moved away from this architecture after the entertainment industry got involved. Sure, it was a cost issue until this point, but it also turned into a telemetry issue afterwards.

