Hacker News | figassis's comments

My process: start ideating and get the AI to poke holes in your reasoning, your vision, scalability, etc. Do this for a few days while taking breaks. This is all contained in one md file with mermaid diagrams and sections.

Then use the ideation doc to architect. Dive into details and tell the AI exactly what your choices are: how certain methods should be called, how logging and observability should be set up, what language to use, type checking, coding style (configure ruthless linting and formatting before you write a single line of code), what testing methodology and framework (unit, integration, e2e), what database, and how you will handle migrations. Pin down as much as possible, so the AI is as confined as possible to how you would do it.

Then, create a plan file and have the AI manage it like a task list, implementing in parts. Before starting, it needs to present you a plan. In it you will notice it makes mistakes, misunderstands things you maybe didn't clarify before, or just forgets. You add those to AGENTS.md or whatever, make changes to the AI's plan, tell it to update plan.md, and when satisfied, proceed.
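As an illustration only (the file names, sections, and checkbox convention here are one hypothetical way to structure it, not a prescribed format), such a plan file might look like:

```markdown
# plan.md - task list the AI maintains

## Decisions (locked; see AGENTS.md)
- Language, linting, and formatting configured before any code is written
- Migrations are hand-written SQL, reviewed by me; no seed data in migrations

## Tasks
- [x] 1. Scaffold project, CI, linters
- [ ] 2. Core data model with validation (no nullable fields without justification)
- [ ] 3. Unit + integration tests for the first feature
```

The point is that the plan is a reviewable artifact: you edit it, the AI updates it, and only then does implementation begin.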

After it's done, review the code. You will notice there is always something to fix: hardcoded variables, a SQL migration with seed data that should not actually be a migration, just generally crazy stuff.

The worst part is that the AI is always very loose on requirements. You will notice all its fields are nullable and records have little to no validation; you report an error when testing and it tries to solve it with a brittle async workaround, like LISTEN/NOTIFY or a callback, instead of the architecturally correct solution. Things that at scale are hell to debug, especially if you did not write the code.
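To make the "nullable fields, no validation" complaint concrete, here is a hypothetical sketch (the `User` model and its fields are invented for illustration) of the tightened version you would push the AI toward: every field required and checked up front, so bad data fails loudly at the boundary instead of surfacing at scale.

```python
from dataclasses import dataclass

@dataclass
class User:
    # All fields are required (no implicit Optional), unlike the
    # everything-nullable schemas AI assistants tend to emit.
    email: str
    age: int

    def __post_init__(self):
        # Validate at construction time so invalid records never exist.
        if "@" not in self.email:
            raise ValueError("email must contain '@'")
        if not (0 < self.age < 150):
            raise ValueError("age out of plausible range")
```

With this in place, a bug reported during testing points at the record that violated the contract, rather than at some downstream consumer of a silently-null field.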

If you do this and iterate you will gradually end up with a solid harness and you will need to review less.

Then port it to other projects.


LISTEN/NOTIFY is not brittle, we use it for millions of events per day.

I find it very interesting that you assume this method would branch out to other projects. I find it even more interesting that you assume all software codebases use a database, give a damn about async anything, and that these ideas percolate out to general software engineering.

Sounds like a solid way to make crud web apps though.


GP is clearly providing examples of categories of tasks. Sure, not all languages do “async fn foo()”, but almost all problem domains involve some sort of making sure the right things happen at the right times, which is in a similar ballpark.

Holier than thou “yeah well I work on stuff that doesn’t use databases, checkmate!” doesn’t really land - data still gets moved around somehow, and often over a network!


Hadoop is open source: https://hadoop.apache.org. But perhaps if one learned to Google, they could have found it? It's a tough chicken-and-egg problem.

I think of it not as "last step in thinking", but as "first contact with reality". Your mind is amazing and lying to you, filling in gaps and telling you everything is ok. The moment you try to export what's in your mind, math stops mathing. So writing is an important exercise.

This is fine. But companies seem to not have a control lever for employee wellbeing. If humanity works to solve problems, don’t you think overwork is also a problem that needs to be addressed?

Where are the lessons?

“The rationale for the change, according to Rodriguez, is that interaction data makes company AI models perform better. Adding interaction data from Microsoft employees has led to meaningful improvements, he claims, such as an increased acceptance rate for AI model suggestions.” That's the problem with promises. Of course using the data makes your model better; everyone knew this, knew you'd be tempted to use it, and that is exactly why you felt motivated to promise not to use it in order to gain adoption. So rug-pulling by saying it helps improve your model is meaningless. You can say you lied. Lying is a real word; it should be used when it applies.

This just means we're in a simulated universe. He's respawned elsewhere.


The problem seems to be identity, a real problem, and it looks like it will only get worse. Would it work to create a zero-knowledge digital identity service (maybe centralized, maybe decentralized, I don't know), where you prove you're human via your government ID, passport, driver's license, whatever, and the service can then attest that you're a real person? So if I'm Digg, I would ask for some form of OAuth, the system would simply verify that you are in fact a human, and you would go on to create your account, forever verified. This way the identity service only does identity: it does not keep a record of where you are attesting, no logs, nothing, just your identity, basically saying yes/no, with no sharing of IDs or any other data.

So people would go through one hurdle in life, to get this id, and reuse it for every service.

Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
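A minimal sketch of the attestation half of this idea, under stated assumptions: the identity service signs a bare yes/no claim carrying no personal data, and the relying site only verifies the signature. For brevity this uses a shared HMAC key; a real deployment would use an asymmetric signature (e.g. the identity service's private key plus a published public key), since the relying party must not hold the signing key. All names here are hypothetical.

```python
import hashlib
import hmac
import json

IDP_SECRET = b"idp-signing-key"  # stand-in for the identity service's key

def attest_human(verified: bool) -> dict:
    """Identity service output: a signed yes/no, with no name,
    document data, or record of which site asked."""
    claim = {"human": verified}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def relying_party_accepts(attestation: dict) -> bool:
    """The site (e.g. Digg) checks only that the claim is genuine
    and says 'human'; it learns nothing else about the person."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, attestation["sig"])
            and attestation["claim"]["human"])
```

The design choice being illustrated: the attestation carries the minimum possible information, so even a breach of the relying site leaks nothing about the underlying documents.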


No, the problem is people want everything for free. The solution is very simple. Charge $5 to open an account. Only allow a person to moderate one forum/community/subreddit/etc. Delete accounts that break rules ruthlessly. This would work, but no one on the internet wants to pay for a quality forum, so we deal with the same crap over and over and over and pretend like there is some other solution.


They want ad supported so they can block all the ads and let the suckers pay. Then complain relentlessly when the content caters to suckers.


1/ KYC is pricey, and users might not want to pay for it

2/ Spammers can hire real people to farm accounts

I think this idea might work if we

- create a reputation graph, where valuable contributors vote for others and spread reputation

- let users fine-tune their reputation graph, so instead of "one for all", each user has a personal customized graph (pick 30 authorities and we rebuild the graph from there)
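The "pick 30 authorities and rebuild the graph from there" idea can be sketched as personalized reputation propagation, in the spirit of personalized PageRank. Everything below is an illustrative assumption, not an existing system: reputation flows from the seeds a user personally trusts along "vouched for" edges, so accounts unreachable from the seeds score zero for that user.

```python
def propagate_reputation(edges, seeds, damping=0.85, rounds=20):
    """Spread reputation from trusted seed accounts along vouch edges.

    edges: dict mapping voter -> list of accounts they vouch for
    seeds: accounts this particular user trusts (their custom graph)
    Returns a score per account; higher means more reputable *to this user*.
    """
    nodes = set(edges) | {v for vs in edges.values() for v in vs} | set(seeds)
    score = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(rounds):
        # Seeds get a constant injection; everyone else only inherits.
        nxt = {n: ((1 - damping) / len(seeds) if n in seeds else 0.0)
               for n in nodes}
        for voter, vouched in edges.items():
            if vouched:
                share = damping * score[voter] / len(vouched)
                for v in vouched:
                    nxt[v] += share
        score = nxt
    return score
```

Because each user seeds their own graph, a spam ring that vouches only for itself gains no reputation in anyone else's view unless a trusted account links to it.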


I think apps that want assurance of your identity should pay for your KYC. They want valuable people after all, and this should go into their CAC. Users still pay nothing, and the identity service does not care about their info: after verification, it drops any details (uploaded documents, whatever) and keeps only a certificate.

The cost for this service is likely keeping up with ID systems for multiple countries, plus infra and support.

Potentially, if this is made into a protocol, it can be decentralized, kind of like the SSL certificate system, so each country manages its own rules.


But they can just plug an AI into a verified account.


I am less concerned here. If you plug an AI into your identity, I guess your identity is revoked. I see the problem, though: once a service notices you're an AI, there is no way to block you, because we don't really know who you are, only that you're human.

So we need a mechanism that makes this identity verifiable; maybe you get a unique identifier from the identity service, so the account can be blocked. There is no mechanism to report you to, say, the identity service ("this is a bot"), so each service manages its own block list.

The risk here is fingerprinting: your ID can be cross-referenced across apps. Maybe this is where you implement a ZK proof that you are who you say you are.
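Short of a full ZK proof, one simpler (hypothetical) mitigation for the fingerprinting risk is pairwise pseudonymous identifiers, similar in spirit to OpenID Connect's pairwise subject identifiers: the identity service derives a different stable ID per service, so each app can block a repeat offender but two apps cannot correlate users by comparing IDs. The function and key names below are invented for illustration.

```python
import hashlib
import hmac

def service_scoped_id(master_secret: bytes, person_id: str, service: str) -> str:
    """Derive a stable, service-specific pseudonym.

    The same person gets the same ID on every login to one service
    (so that service can block them), but the ID differs per service,
    so apps cannot cross-reference users by identifier."""
    msg = f"{person_id}|{service}".encode()
    return hmac.new(master_secret, msg, hashlib.sha256).hexdigest()
```

This still requires trusting the identity service not to log the mapping; removing that trust is where an actual zero-knowledge construction would come in.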


I don’t love the original idea because uploading identification is risky. You could just plug AI into a verified account but at least the vector is a single account instead of unbounded.


But then if the AI is detected that person can be permanently banned. No more AI. No new accounts.


So if someone compromises your identity they can unperson you? How will the AI be detected? By another AI?


"So if someone compromises your identity they can unperson you?"

You've identified a problem that unrelated systems also have. Like banks and identity theft. This solution isn't responsible for causing that problem.

"How will the AI be detected? By another AI?"

However a platform likes to. Let the best platform win.


Can we just use something like AWS instance roles, where the key isn't even known to the agent? The sandbox is authenticated, the agent running there sends requests, and the sandbox middlemans the request to Perplexity, which authenticates the sandbox.
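The middleman idea above can be sketched as a tiny credential-injecting proxy. Everything here is a toy stand-in (the token set, key, and request shape are invented, and real code would use short-lived, attested sandbox credentials): the agent only ever holds a sandbox token, and the proxy swaps it for the real API key on the way out.

```python
REAL_API_KEY = "sk-real-key"        # held only by the proxy, never the agent
VALID_SANDBOX_TOKENS = {"sbx-123"}  # issued when the sandbox is provisioned

def forward(request: dict) -> dict:
    """Authenticate the sandbox, then attach the real credential.

    The agent's request never contains the API key; it cannot leak
    what it never sees."""
    token = request.get("sandbox_token")
    if token not in VALID_SANDBOX_TOKENS:
        raise PermissionError("unauthenticated sandbox")
    # Strip the sandbox token and inject the real credential upstream.
    upstream = {k: v for k, v in request.items() if k != "sandbox_token"}
    upstream["headers"] = {"Authorization": f"Bearer {REAL_API_KEY}"}
    return upstream  # would then be sent to the upstream API
```

This mirrors the instance-role pattern: the trust boundary is the proxy, and revoking a sandbox token cuts off the agent without rotating the upstream key.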


I see how this can boost productivity... for those who today already produce value voluntarily. They will move one level higher. The rest will 100x the amount of performative work. Everyone will be busier creating presentations and charts that no one needs and no one will read. Managers will ask for new presentations and reports every sync, and hours will be spent discussing things that don't actually matter.

