Hacker News | lateforwork's comments

Copilot is just Microsoft's term for AI. How many products have Copilot? Just about all of them.

> Copilot is just Microsoft's term for AI.

This comment really helps me put things in perspective.

I'm guessing it's Microsoft's way of naming their LLM-powered products/features, the same way "Azure" is basically their codename for "cloud".


I’ve absolutely seen adverts on TV in the UK by Microsoft advertising Microsoft Cloud. Azure was not mentioned anywhere…

Maybe that's because, for people who've never heard of Azure, the name would just blend into the wide spectrum of cloud products, whereas "Microsoft" is something almost everyone would recognize.

Except they named their local hosted version of TFS/VSTS Azure DevOps Server (where the cloud version is Azure DevOps Services).

They just like branding their dev tools for whatever they're pushing at the time. In 2002 they named Visual Studio "Visual Studio .NET".


That's because TFS/VSTS followed the same naming convention where the "S" stood for either Server or Services. Once they rebranded the Azure-backed hosted version Azure DevOps Services, then it no longer really made sense to do anything but rename the self hosted version in the same fashion.

It would have been more confusing to have Visual Studio Team Server and Azure DevOps Services being the same product but hosted differently.


Not just developer tools, reusing trademarks in general.

At one point the next version of Windows Server 2003 was going to be Windows .NET Server.

Also Windows CE, Outlook Express, Xbox App, Xbox Game Pass for PC, Visual Studio Code, Visual Studio for Mac, Microsoft Office Excel, Microsoft Office Word, etc.


There is no perfect pasta sauce.

Only perfect pasta sauces.

Howard R. Moskowitz is an American market researcher and psychophysicist. He is known for the detailed study he made of the types of spaghetti sauce and horizontal segmentation. By providing a large number of options for consumers, Moskowitz pioneered the idea of intermarket variability as applied to the food industry.

https://en.wikipedia.org/wiki/Howard_Moskowitz


It makes sense. And Google, in its own way, names all its AI products "Gemini".

Which is unusually simple. I would expect Google to use 10 more marketing names simultaneously without any logic to the product lines.

Next year they will introduce "hAIngouts" as an AI chat bot.

Ouch. Maybe "Google wAIve" for collaborative chats.

> Which is unusually simple. I would expect Google to use 10 more marketing names simultaneously without any logic to the product lines.

I think they were lucky this time that, after only a few iterations, they landed on a good name that has since stuck.

Anyone remember Google Bard or LaMDA?


The r/Bard subreddit is still quite active for some reason. Reminds me of Google Glass.

I still like the name Bard

Didn't it start as Bard?

Well, it depends on what you're talking about. The model names were originally LaMDA, followed by PaLM, and finally Gemini. The chatbot product was internally known as Meena, launched as Bard, and then transitioned to Gemini once the Gemini model came out.

There is Vertex AI, NotebookLM, Antigravity, Nano Banana, Veo, and Lyria; the open models are Gemma and Gato.

They’ve improved it since the initial launch when the service, model names and plan names all sounded similar and contradictory.

And IBM has "Watson"

SAP sales reps used HANA to mean "cloud" in the beginning... which was BS back then and still is today. But while everybody wanted to be in the cloud, SAP sales was scared of not being with the cool kids if they didn't somehow add to the cloud talk.

And Silly has Silly!

They'll probably use other astrology terms, the way Android versions were named for desserts.

Google Scorpio will be their best model yet, except sometimes it will say things that cut you to the core.

But they put Gemini in Google Docs; they didn't rename Docs to Gemini like Microsoft did.

I think they'll more likely launch competing AI projects like 'Aquarius' and 'Doh' or something

Great point. We’re about to get a wave of Apple Products with “Apple Intelligence” in a similar way.

If they ever get Apple Intelligence going.

Is it in solitaire or minesweeper?

Be careful what you wish for

Microsoft Copilot for Microsoft Flight Simulator

Microsoft Flight Copilot for airline pilots!

Microsoft should add a new game to Windows to accustom Windows users to Copilot.

There are restrictions on games with even simulated gambling.

Just what we need...AI agents that will play our games for us!

Didn't they kill those?

Does Office exist or not? I thought it was rebranded to Copilot365

Yeah imagine if they had unique product names for "AI in OneDrive", "AI in SharePoint", "AI in Outlook"... That would be even more ridiculous.

I think this is the right answer. I am frustrated by Copilot and by many aspects of AI, but to me it seems like straightforward branding: you use a Microsoft product, you want to use AI in it, you look for Copilot (name and/or icon).

To me, the issue isn't that they've named so many things 'Copilot' but rather that Copilot is in every goddamn product.


You're the second person implying that "Copilot" is just a qualifier that identifies part of some software...

Microsoft has been replacing most of their brands with Copilot. There's no searching for it within a product; the product itself is named "Copilot".


Not if AI is ultimately a commodity, which it likely is. We don't want or need branded terms for other common features, like networking or files. In the early days of networking, before it was standard, there were attempts to brand things like NetBIOS with IPX and such. I don't want to repeat all of that every time some company wants to establish vendor lockin or branding.

Keep in mind that employers have to pay $100,000 in visa fees (in addition to competitive salaries) for each H-1B visa. Clearly these immigrants are not undercutting US workers. It is $100K cheaper to hire a US worker.

To clear up confusion: this $100K applies to brand-new H-1B petitions filed from outside the country.

It currently does not apply if you are already in the US, if you are transferring jobs with an existing H-1B, or if you are renewing your H-1B.

source: former h1b

Side note: as of February, it's estimated only 85 H-1B petitions paid the $100K fee. The rest did not fall under the qualification.

https://www.staffingindustry.com/news/global-daily-news/1000...


Unless they get waivers, which I'm sure Larry has worked out with his buddy.

What should they be using instead? These astronauts are not Linux hackers.

These astronauts are trained to use the system NASA puts them in.

And ultimately they have a lot more important things to be doing than learning a different email client than the one they use at their desk on Earth. This is an email client on a laptop, not a navigation system.

No they don’t. They’re our best and brightest, and they train for years at their one, important job, which is to use the system they’re given.

The mission of the astronauts on board is to test the damn Orion spacecraft in preparation for a human landing on the moon.

> NASA flight controller and instructor Robert Frost explained the reasoning plainly in a post on Quora (via Forbes). “A Windows laptop is used for the same reasons a majority of people that use computers use Windows. It is a system that people are already familiar with. Why make them learn a new operating system,” he reportedly wrote.

https://www.msn.com/en-in/technology/space-exploration/nasa-...


Maybe he should have designed the rest of the controls to look like the cockpit of a 2003 Toyota Camry. It is a system that people are already familiar with. And actually reliable.

Toyota actually is involved in the Artemis program. https://en.wikipedia.org/wiki/Lunar_Cruiser

That’s awesome. I’m assuming there’s zero chance it actually gets deployed (for a value of zero that is less than the chance a moon base is actually deployed, also assumed zero) but if it does, apparently the controls will look like this- https://sj.jst.go.jp/stories/2024/s0124-01p.html

If they were, they'd probably have skipped the mission.

Do you think the US has idle capacity that can be activated at a moment's notice?

> Do you think the US has idle capacity that can be activated at a moment's notice?

I'm sure some very smart MBA increased profits by eliminating spare capacity or making cuts that would make it much harder to spin up. That's American business culture: focus on this quarter or this year, nothing else matters.


We can just buy them off Alibaba

STRICT has severe limitations; for example, it does not have a date data type.

Why is it a problem that it allows data that does not match the column type? SQLite is intended for embedded databases, where only your application reads and writes from the tables. In this scenario, as long as you write data that matches the column's data type, data in the table does match the column type.
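To make this concrete, a small sketch (Python's stdlib `sqlite3`, hypothetical table) of what an ordinary, non-STRICT column accepts:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE m (n INTEGER)")  # ordinary, non-STRICT table

# INTEGER affinity coerces lossless matches, but mismatches are stored as-is:
con.execute("INSERT INTO m VALUES (42)")       # stored as integer
con.execute("INSERT INTO m VALUES ('42')")     # coerced to integer 42
con.execute("INSERT INTO m VALUES ('hello')")  # kept as text, no error raised

rows = con.execute("SELECT n, typeof(n) FROM m ORDER BY rowid").fetchall()
print(rows)  # [(42, 'integer'), (42, 'integer'), ('hello', 'text')]
```

As long as the single writing application only inserts integers, the third case never occurs, which is the point being made above.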


>> Why is it a problem that it allows data that does not match the column type?

“Developers should program it right” is less effective than a system that ensures it must be done right.

Read the comments in this thread for examples of subtle bugs described by developers.


> “Developers should program it right” is less effective than a system that ensures it must be done right.

You're right, of course. But this must be balanced with the fact that applications evolve, and often need to change the type of data they store. How would you manage that if this is an iOS app? If SQLite didn't allow you to store a different type of value than the column type, you would have to create a new table and migrate data to a new table. Or create a new column and abandon the old column. Your app updates will appear to not be smooth to users. So it is a tradeoff. The choice SQLite made is pragmatic, even if it makes some of us that are used to the guarantees offered by traditional RDBMSs queasy.


> Why is it a problem that it allows data that does not match the column type? SQLite is intended for embedded databases

I'm afraid people forget that SQLite is (or was?) designed to be a superior `open()` replacement.

It's great that modern SQLite has all these nice features, but if Dr. Hipp was reading this thread, I would assume he would be having very mixed feelings about the ways people mention using SQLite here.


No, I think that people can use SQLite any way they want. I'm glad people find it useful.

I do remain perplexed, though, about how people continue to think that rigid typing helps reliability in a scripting language (like SQL or JSON) where all values are subclasses of a single superclass. I have never seen that in my own practice. I don't know of any objective research that supports the idea that rigid typing is helpful in that context. Maybe I missed something...


> where all values are subclasses of a single superclass

I don't understand this. By values do you mean a row (in database terms)? I don't understand what that has to do with rigid typing.

Lack of rigid typing has two issues, in my opinion: First, when two or more applications have to read data from a single database, lack of an agreed-upon-and-enforced schema is a limitation. Second, when you use generic tools to process data, the tools have no idea what type of data to expect in a column, if they can't rely on the table schema.


First off, I am so glad the famous "HN conjure" actually worked! My "if Dr. Hipp was reading this thread" was tongue in cheek because on HN it was extremely likely that's precisely what would happen. Thank you for chiming in, Dr. Hipp - this is why I love HN!

So, in case you missed it, you're responding to Dr. Hipp himself :)

> I don't understand what that has to do with rigid typing.

Now I would like to learn a bit from Dr. Hipp himself, so here's my take on it:

Scripting languages (like my fav, Python) have duck or dynamic typing (a variation of what I believe Dr. Hipp specifically calls manifest typing). Dr. Hipp's take is that the datatype of a value is associated with the value itself, not with the container that holds it (the "column"). (I must say I chose the word "container" here to jibe with Dr. Hipp's "manifest". Curious whether he chose that word for typing for the same reason!)

- In Python, everything is fundamentally a `PyObject`.

- In SQLite, every piece of data is (or was?) stored internally as a `sqlite3_value` struct.

As a result, a stack that uses Python and SQLite is extremely dynamic and if implemented correctly, is agnostic of a strict type - it doesn't actually care. The only time it blows up is if the consumer has a bug and fails to account for it.

Hence, because this possibility exists, and because no objective research has proven strict typing improves reliability in scripting environments, it's entirely possible our love for strict types is just mental gymnastics over problems that could also have been addressed, equally well, without strict typing.

I can reattempt the "HN conjure" on Wes McKinney and see if a similar reason led him to compromise on dynamic typing (NumPy enforces static typing) in the Pandas 1.x DataFrame, because, as both of them are likely to say, real datasets of significant size rarely have all "valid" data. This design allows Pandas to handle invalid and missing fields (even if it affects performance).

A good dynamic design should work with both ("valid" and "invalid") present. For example: layer additional "views" on top of the "real life" database that enforce your business rules while you still get to keep all the real world, messy data.
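That layered-view idea could be sketched like this (Python stdlib `sqlite3`; the schema and the "amount must be numeric" business rule are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Untyped column: the messy real-world feed lands here unfiltered.
con.execute("CREATE TABLE raw_orders (id INTEGER PRIMARY KEY, amount)")
con.executemany("INSERT INTO raw_orders (amount) VALUES (?)",
                [(19.99,), ("42.50",), ("N/A",), (None,)])

# A view enforces the business rule (amount must be numeric) while the
# messy source rows are kept intact underneath.
con.execute("""
    CREATE VIEW clean_orders AS
    SELECT id, CAST(amount AS REAL) AS amount
    FROM raw_orders
    WHERE typeof(amount) IN ('integer', 'real')
       OR (typeof(amount) = 'text' AND amount GLOB '[0-9]*')
""")
print(con.execute("SELECT * FROM clean_orders").fetchall())
# [(1, 19.99), (2, 42.5)]
```

Consumers that want the business rules query the view; auditing or repair jobs still see every raw row.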

OTOH, if you dont like that design but must absolutely need strict types, use Rust/C++/PostgreSQL/Arrow, etc. They are built from the ground up on strict types.

With this in mind, if you still want to delve into the "Lack of rigid typing has two issues" portion, I am very happy to engage (and hope Dr. Hipp addresses it so I learn and improve!)

The real world is noisy, has surprises in store for us and as much as engineers like us would like to say we understand it, we don't! So instead of being so cocksure about things, we should instead be humble, acknowledge our ignorance and build resilient, well engineered software.

Again, Dr. Hipp, Thank you for chiming in and I would be much obliged to learn more from you.


Thank you for the great explanation. But SQL isn't as dynamically typed as you suggest. If a column is defined as DECIMAL(8, 2), it would be surprising for some values in that column to be strings. RDBMSs are expected to provide data integrity guarantees, and one of those guarantees is that only values matching the declared column type can be stored.

Relaxing that guarantee has benefits. For example, it can make application evolution easier--being able to store strings in a column originally intended for numbers is convenient. But that convenience can become a liability when multiple applications read from and write to the same database. In those cases, you want applications to adhere to a shared schema contract, and the RDBMS is typically expected to enforce that contract.

It also creates problems for generic tools such as reporting systems, which rely on stable data types--for example, to determine whether a column can be aggregated or how it should be formatted for display.


>> but if Dr. Hipp was reading this thread

He is.


If you reached out and notified him, Thank you. I hope he has time to revisit - had a few more followups. Cheers!

No I did not I think he’s been a regular community member a long time he probably just saw it on front page.

When your application's design changes, you may need to store a slightly different type of data. Relational databases traditionally require explicit schema changes for this, whereas NoSQL databases allow more flexible, schema-less data. SQLite sits somewhere in between: it remains a relational database, but its dynamic typing allows you to store different types of values in a column without immediately migrating data to a new table.

This flexibility is convenient when only one application reads and writes to the table. But if multiple applications access the same tables, the lack of a strictly enforced schema becomes a liability. The same is true when using generic tools to process data in SQLite tables, because such tools don't know what type of data to expect. The column type may be X but the actual data may be of type Y.
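The declared-type-versus-actual-data mismatch described here can be observed directly (Python stdlib `sqlite3`, made-up table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (ts TEXT, value REAL)")
con.execute("INSERT INTO readings VALUES ('2024-01-01', 3.14)")
# Accepted despite the REAL column type:
con.execute("INSERT INTO readings VALUES ('2024-01-02', 'sensor offline')")

# A generic tool reading the schema is told the column is REAL...
declared = con.execute(
    "SELECT type FROM pragma_table_info('readings') WHERE name = 'value'"
).fetchone()[0]
print(declared)  # REAL

# ...but scanning the actual storage classes reveals a mix:
actual = sorted(t[0] for t in
                con.execute("SELECT DISTINCT typeof(value) FROM readings"))
print(actual)  # ['real', 'text']
```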


> get it pumping out CVEs.

Is that a good thing or bad?

I see that as a very good thing. Because you can now inexpensively find those CVEs and fix them.

Previously, finding CVEs was very expensive. That meant only bad actors had the incentive to look for them, since they were the ones who could profit from the effort. Now that CVEs can be found much more cheaply, people without a profit motive can discover them as well--allowing vulnerabilities to be fixed before bad actors find them.


It's good and bad.

Not all CVEs are the same; some aren't important. So it really depends on what gets found as a CVE. The bad part is you risk a flood of CVEs that don't matter (or have already been reported).

> That meant only bad actors had the incentive to look for them

Nah. Lots of people look for CVEs. It's good resume fodder. In fact, it's already somewhat of a problem that people will look for and report CVEs on things that don't matter just so they can put "I found and reported CVE xyz" on their resume.

What this will do is expose some already present flaws in the CVE scoring system. Not all "9"s are created equal. Hopefully that leads to something better and not towards apathy.


It also depends on if the CVEs can be fixed by LLMs too. If they can find and fix them, then it's very good.

Fixing isn't often a problem for CVEs. The hard part is almost always finding the CVE in the first place.

There are some extreme cases that might require extensive code changes, and those would benefit from LLMs. But a lot of the issues are things like off-by-one errors with pointers.


Fixing is now the bottleneck.

Most patches are non-trivial and then each project/maintainer has a preferred coding style, and they’re being inundated with PRs already, and don’t take kindly to slop.

LLMs can find the CVE fully zero interaction, so it scales trivially.


The biggest question is whether you can meaningfully use Claude on defense as well, e.g. whether it can be trusted to find and fix the source of the exploit while maintaining compatibility. Finding CVEs directly helps attackers, but it only helps defenders detect potential problems unless there's a second step where the patch can also be created. If not, you've got a potential tidal wave of CVEs that still have to be addressed by people. Attackers can use CVE-Claude too, so it becomes a bit of an arms race where you have to find people able and willing to spend the money to have those exploits found (and hopefully fixed).

How about releasing your own source code? It is a beautiful site, love the UX as well as functionality.

It screams vibe coding. This is the anthropic look. Just ask Claude and give it a screenshot.

Vibe coding is also why this was released hours after leak instead of days/weeks.

Of course I expect it is vibe coding. It would be insane to code anything by hand these days. But that doesn't mean there is no creative input by the author here.

>> It would be insane to code anything by hand these days.

I strongly disagree, but it made me chuckle a bit, thinking about labeling software as "handmade" or marketing software house as "artisanal".


There are a lot of errors you can miss when coding by hand, even as a seasoned developer. Try taking Claude Code, pointing it at your repo, and asking it to find bugs. I bet it will.

Claude is actually a crazy good vuln researcher. If you use it that way, your code might just be more secure than written purely by hand.


Sure, just like drug-sniffing dogs. Whether they've actually found something or are just pleasing the operator is another story.

Our organic artisanal code is written by free-range developers

"free-range" means fully remote, right?

Depends on what you’re building and whether it’s recreational or not. Complex architecture vs a ui analysis tool, for example. For a ui analysis tool, the only reason you code by hand is for the joy of coding by hand. Even though you can drive a car or fly in a plane there are times to walk or ride a bike still.

Depending on your standards and what company is making it you could even have “cruelty free.”

You're well on the path to AI-fueled psychosis if you genuinely believe this.

I genuinely believe this. Even if you're inventing a new algorithm it is better to describe the algorithm in English and have AI do the implementation.

At least it's more productive than AI Derangement Syndrome.

Yes, it is vibe-coded; it took like 10-15 minutes. I did not know how to write a piece of code 4-5 months back.

Must everything be artisanal for some people? </s>

As a cynical modern eng look for landing page skills

Guess what? People have ZERO reason to Open Source anything now.

One reason, besides basic altruism, is so you can put the projects on your resume. This is especially helpful if the project does very well or gets lots of stars.

That said, Jevons paradox will likely mean far more code is open sourced, simply due to how much code will get written in total.

We should be applauding the promotion of science and useful arts that genAI is fueling.

But egos are involved.


Why would you think that?

I'm a committed open source dev and I've flipped my own switch from "default public" to "default private".

Because nobody wants their shit stolen by some punk.

Go to https://www.copilot.com/ and ask a question. You'll see from the answers that it is indeed for entertainment only. It is ridiculously behind ChatGPT, and I don't know how that can happen since Microsoft has access to the same models.

It's not as bad as in the GPT-4.1 days, but I'm wondering if it's just the system prompt or what's going on.

Are you not entertained?!

Oracle Database has unparalleled scalability. Ask someone who works in the Microsoft SQL Server division what their bug database looks like. They will tell you that a single SQL Server instance cannot scale to serve the entire SQL Server division. Oracle, on the other hand, runs a single database for the entire company. No other database is this scalable.

But Oracle is not just a database company. Oracle started as a database company, but today they are more an applications company than a database company. They have ERP back-office applications (finance, operations, HR) and CRM front-office applications (sales, marketing, service). Oracle bought a large number of applications software companies, such as Siebel, PeopleSoft, JD Edwards, NetSuite and Cerner, to become this big.

Of course, Oracle is also a major cloud services provider, offering AI superclusters and GPU instances from NVIDIA and AMD (context for today's layoffs).


I'm actually impressed by the amount of abuse our Oracle instances are able to take from our developers.

Massive amounts of parallel single reads and writes with millisecond responses, mixed with mega-joins of incorrectly indexed tables that work flawlessly "on their machine" and limp along well enough to sneak past performance testing, with just the planner silently writhing in agony.


The original question discounts the capability of Oracle's database too much, treating it as only something "golf executives" buy. When you have a large problem that is best solved with a relational model, Oracle delivers and can indeed be worth all the money and license hell involved.
