Hacker News | mattdeboard's comments

Are you upset people are being critical of a shabbily run government program?

[flagged]


Is this not a government program? Did someone in the cabinet choose to do this?

I’d prefer they not release shoddily built propaganda apps.


https://45press.com/ would be my guess.

Uh, yeah, dude, when Whitehouse.gov announces its new app, the app is a government program. Hope this helps but something tells me it won't.

He explicitly says he can't determine it, but that the location tracking as configured will turn on once the user grants consent. All true statements.

How would you have written it differently?


"If the user chooses to opt-in and grants location-tracking permission, the app is then, and only then, able to track the user's location?"

You would be lying if you wrote that because you do not know if that is true.

But that's not true; it could easily fallback to other forms of geolocation like using the current IP.

That would only let you see the local network IP (not actually sure you even get that, tbh). To get more detailed information about the IP configuration, you need the Location permission. Been there, done that: most Android network-information calls return degraded information if you haven't been granted Location permission.

If an app can make an HTTP request, the app can know the user's public IP address and the geolocation derived from that.

This data has well-known limitations, but I think it is the fallback people are talking about here.
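The fallback described above can be sketched in a few lines. This is a minimal Python example, assuming a public geo-IP service such as ip-api.com (the endpoint and its JSON field names are illustrative, not tied to any particular app):

```python
import json
from urllib import request

def fetch_geo(url="http://ip-api.com/json/"):
    """Fetch the caller's public IP and coarse geolocation.

    Any app that can make an HTTP request can do this; no
    location permission is involved. ip-api.com is one of
    several public geo-IP services."""
    with request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read())

def summarize_geo(payload):
    """Reduce a geo-IP response to the fields the fallback needs.

    Field names ('query', 'country', 'city') follow ip-api.com's
    response format; other services use different keys."""
    return {
        "ip": payload.get("query"),
        "country": payload.get("country"),
        "city": payload.get("city"),
    }
```

The point of the sketch is the limitation, too: this resolves to roughly city-level accuracy at best, which is the "well-known limitations" caveat above.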


Good lord. So could literally any app on the planet

People are determined to make the future of code an even bigger dumpster fire than the present of code.


Do you not review junior developers' code? I don't understand your point


Your comment seems to imply AI is currently at a junior developer's level -- 12 months ago I would have agreed (like I mentioned in my parent comment, both near the end and about the "latter" team I was a part of), but it's gotten quite good over the past few months.

When even Linus Torvalds compliments AI code (ref: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fa...) I think we can say he wouldn't have said that about any junior engineer.

That's not to say it won't ship bugs, but so does any engineer (junior or senior). It's up to you what level of tooling you surround the AI with (automated testing, linting, etc.), but it doesn't hurt to have that set up anyway (automated tests have helped prevent senior devs from shipping bad code too).


Ok but are you arguing against code reviews of AI generated code?


I also have a scratch-my-own-itch project[1] that leverages an LLM as a core part of its workload. But it's so niche I could never justify opening it up to general use. (I haven't even deployed it to the web because it's easier to just run it locally since I'm the only user.)

But it got me interested in a topic I have been calling "token economization." I'm sure there's a more common term for it, but I'm a newb to this tech. Basically: how to drive down the "run rate" of token utilization per request.

Have you taken a stab at anything along this vein? Like prompt optimization, and so on? Or are you just letting 'er rip and managing costs by reducing request volume? (Now that I've typed this comment out I realize there is so much I don't know about basic stuff with commercial LLM billing and so on.)

[1] https://github.com/mattdeboard/itzuli-stanza-mcp

edit:

I asked Claude to educate me about the concepts I'm nibbling at in this comment. After some back-and-forth about how to fetch this link (??), it spit out a useful answer https://claude.ai/share/0359f6a1-1e4f-4ff9-968a-6677ed3e4d14
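One concrete version of the "token economization" idea above is trimming conversation context to a budget before each request. This is a hedged sketch, not any particular provider's API: the 4-characters-per-token heuristic is a rough stand-in for a real tokenizer, and the message shape (`role`/`content` dicts) is just the common chat convention:

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real implementation would use the provider's tokenizer.
    return max(1, len(text) // 4)

def trim_history(messages, budget: int):
    """Keep the system prompt plus the most recent non-system
    messages that fit within `budget` approximate tokens."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(approx_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-first
        cost = approx_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    # Restore chronological order for the kept tail.
    return system + list(reversed(kept))
```

Dropping old turns is the bluntest lever; summarizing them or caching stable prefixes are the usual refinements.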


Thanks for the question and links.

I haven't done any token/cost optimization so far because a) the app works well enough for me personally, and b) I need more data to understand which areas to optimize.

Most likely, I'd start with quality optimizations that matter to users. Things to make people happier with the results.


Interesting note about the LEFT JOIN in the CTE being converted into an inner join. Didn't know that.


Yeah, it's one of those things that is hard to catch unless you've been bit by it before and know to look for it. Analytics teams at scale are at a much higher risk of this sneaking in, which is where automatic blocking with Lexega is helpful. No one wants to have to explain to their leadership why their dashboards were wrong from such a subtle SQL bug months down the road.
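The gotcha being discussed is that filtering on a right-table column in the WHERE clause discards the NULL rows a LEFT JOIN would otherwise preserve, silently turning it into an inner join. The comment above describes it happening inside a CTE, but the mechanics are the same in a plain query. A minimal demonstration with Python's sqlite3 module and hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE orders (user_id INTEGER, status TEXT);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO orders VALUES (1, 'shipped');
""")

# LEFT JOIN alone: bob survives, with NULL for the order column.
left = conn.execute("""
    SELECT u.name, o.status FROM users u
    LEFT JOIN orders o ON o.user_id = u.id
""").fetchall()

# Filtering on the right table's column in WHERE drops the NULL
# rows -- the LEFT JOIN now behaves exactly like an inner join.
filtered = conn.execute("""
    SELECT u.name, o.status FROM users u
    LEFT JOIN orders o ON o.user_id = u.id
    WHERE o.status = 'shipped'
""").fetchall()

# Moving the predicate into the ON clause keeps the outer rows.
fixed = conn.execute("""
    SELECT u.name, o.status FROM users u
    LEFT JOIN orders o ON o.user_id = u.id AND o.status = 'shipped'
""").fetchall()
```

Here `left` and `fixed` both return two rows (bob paired with NULL), while `filtered` returns only one, which is exactly the kind of silent row loss that wrecks dashboards months later.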


Honestly this insight feels very actionable to me. I do more SQL reporting for biz analysis than I would like (I'm a dev, not a biz analyst).

I may take a crack at this tool next week.


Looping back here - trial licenses can now be obtained instantly through the free trial form on the website with just an email. No outreach needed on your part. Here for support if you decide to try it.


Hey just saw this. I will probably take a look at this on Monday.


Would love to help out! Shoot me an email at trial@lexega.com for a 30-day free trial license.


did you use AI to write this response? Why?


Not many, but junior devs grow into senior devs who do, which is the point. If there are no junior devs there is no one growing into those skill sets.


My work-issued dev device is a Surface Pro 10. I can't use WSL2 for various regulatory reasons. I will never, ever work on software like this again. Worst development experience of my life because of what a miserable dev env Windows is.

I know that's been a meme since forever, but my firsthand experience supports it to the extreme.


It means their servers were unreachable due to a network misconfig.


