Hacker News | weekay's comments

Depends on the country though. Not everywhere can you get a PAYG SIM with no questions asked and no proof of who you are. What I feel we need from the mobile operators is a service like iCloud's "Hide My Email": you keep your mobile number but hand out per-use or pre-generated throwaway disposable numbers for use cases like the one described in the article, without having to go and buy a new SIM. Surely that shouldn't be that hard to set up from a mobile operator's perspective? Also, btw, if you think the number you shared with your family and banks is safe and hasn't been outed, that's a myth. Even on apps like LinkedIn, I have found that if you provide access to your contact list, they get to harvest your contacts and their numbers.


Sure, but I'm not trying to protect my privacy from the phone network or the government.

I just want my delivery driver to stop sending me creepy texts.

And if my pizza place leaks my number, I can throw it away without having to tell friends to update their address books.


I think we're fortunate in the UK that you can just pick up a PAYG SIM at almost any cornershop. Perfect for these "verification" code SMSs. One of the main reasons I got a dual-SIM phone as well.


I've not bought them in a while, but don't most networks still require you to "activate" them, setting up an account with your personal details? It's possible they don't rigorously check them, though.


I'd very much like it if Apple added a "Hide My Number" option to iCloud+.


> no questions asked w/o having to prove who you are SIM

The blogger said nothing about all this. That's totally over the top just to obtain a disposable number. He's talking about a prepaid SIM. When I signed up with AT&T they didn't ask and I didn't tell, but who cares? That's not the point here.


Given that many services do not accept VoIP numbers now, and some even reject prepaid mobile numbers, I'm not sure how long a service like this would keep working. Although maybe if Apple made it a default, it couldn't be blocked, much like their Private Relay or iCloud hidden emails.


Seriously? Who rejects prepaid numbers? I haven't had a contract phone number in 20 years.


I can't complete opening an account with Capital One because I have prepaid wireless. When talking to customer service, they said this happens and there's no fix. Ubisoft Connect (their Steam competitor) also rejects it.


From personal experience, if you can in your area, ditch em. There's a bunch of banks and brokers who will not require a permanent phone if you have a physical address or similar. As long as you can prove habitation and/or that you're a real person, there's almost no justification. Mine's fine with me rarely even having a functional phone (I top up pre-pays only when I need them. Most months nobody I care about directly calls me.)


It was an F2P first-person shooter that uses phone numbers for verification. CoD, I believe, but it looks like Blizzard does it as well for battle.net.

I remember it because before that I assumed they had no way to determine how a phone number was purchased, but I think they can just blacklist prepaid providers.


This is very common in retail. What tends to happen is that a retail buyer works with a supplier to order a product in. The product packaging carries a promotional or informational site, printed as a QR code. From the buyer's perspective this is a way to provide value or information to their customer, and the supplier fronts the cost. The IT teams within the retailer aren't kept in the loop, and aren't even aware there is a site hosting any of this content. All the content and marketing is done by an agency hired and managed by the category or merchandising teams at head office.

The product sells for a quarter, or six months at most. Products get rotated and go back to the warehouse until, a year or so later, the stock needs to be liquidated at promotional discount pricing as part of back-to-school or Black Friday etc. By then the agency that fronted this and created the site has lost the domain, or the site isn't maintained and gets compromised. At that point the product is back on the shelf while the domain has been hijacked or the hosting taken over by a malicious actor. Then the IT/security teams in the retail organisation are asked to step in and support their business colleagues.

Every major retail corporation will have this happen to them at least once a year. IT teams will have a laugh about it, and nothing ever changes as a process, because it doesn't really affect the share price or damage the retailer's reputation as such.


Always remember to convert a QR code back to text before printing / distributing it.

There are some shady QR code generation sites on the Internet that produce codes that work for a week or so, but go to some unexpected third-party domain that redirects to your site. Later, you find out that you have to pay them a subscription fee if you want the QR code to keep working.
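One cheap defence, sketched below under the assumption that you've already decoded the QR image back to text (the image decoding itself would need a library such as `pyzbar`, and the `TRUSTED_HOSTS` allowlist and URLs here are hypothetical): verify that the embedded URL points straight at a domain you control, not at some generator's redirector.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains we actually control.
TRUSTED_HOSTS = {"example.com", "www.example.com"}

def qr_url_is_trusted(decoded_text: str) -> bool:
    """Return True only if the URL baked into the QR code goes
    directly to one of our own domains over HTTPS, with no
    third-party redirector in between."""
    url = urlparse(decoded_text)
    return url.scheme == "https" and url.hostname in TRUSTED_HOSTS

print(qr_url_is_trusted("https://example.com/promo"))         # True
print(qr_url_is_trusted("https://qr-gen.example.net/abc123")) # False
```

Running this check before anything goes to print catches exactly the failure mode above: a code that "works" today but routes through a domain you don't own.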


This is why I've grown even more careful about introducing new domains in production.

If I keep everything in one or a small number of production domains, then even if a product is shut down, a project ends, and everyone has long forgotten about it, it's still hitting my load balancers and I can deal with it. Cheaply, too: some 404 pages delivered by a load balancer probably cost cents or less per month. I can even serve a cute branded page for it, given a few conditions.

And some POs argue that this is controlling and constricts their freedom, and such. And yes, it is. But on the other hand, we won't end up with porn hosted on something the company once promoted. Unless the company wants to rebrand as such.


Yeah registering a domain is sort of a permanent act. If you ever let it expire, someone else can take it over and start receiving all emails, http requests, and anything else directed at services you used to run there. And possibly responding to them. They'll easily get certificates to verify the domain, since all that's needed to do that is control of the domain.


"a domain is forever" is kind of a scary thing


DNS was designed to delegate subdomains, and we should do it.

But easier for every team to grab a new domain.


From a customer's point of view this is also a lot more trustworthy. Hypothetical example: if I visit annualpromotion2023.pepsi.com, I know for sure Pepsi owns the domain and would be more comfortable putting personal information in there, compared to pepsiannualpromotion.com, which is far more likely to be a scam.
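That trust check can even be automated when vetting promo links. A rough sketch, with the caveat that a real implementation should use the Public Suffix List to find the registrable domain rather than a bare suffix test (the URLs are just hypothetical examples in the spirit of the ones above):

```python
from urllib.parse import urlparse

def same_org_domain(url: str, parent: str) -> bool:
    """Rough check that a promo URL lives under the brand's own
    domain. NOTE: a naive suffix test only; production code should
    consult the Public Suffix List for the true registrable domain."""
    host = urlparse(url).hostname or ""
    return host == parent or host.endswith("." + parent)

print(same_org_domain("https://annualpromotion2023.pepsi.com/win", "pepsi.com"))  # True
print(same_org_domain("https://pepsiannualpromotion.com/win", "pepsi.com"))       # False
```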


One of my customers found out someone had copied their website almost to perfection under a different domain name and started advertising their products at much lower prices. They wanted the website gone, of course. So I had to explain that our company doesn't have any jurisdiction over some website hosted on the other side of the planet.


I recently received a subpoena from 'the other side of the world' regarding a domain I'd registered not long before.

It seems the domain in question was one of many involved in IP infringement against a global fashion brand long before I came along. A simple check of registrar data would confirm I have f all to do with this.

Either some big law firm knows something you don't, or they're just scamming their client? Surely that can't be the case?


I don't think much of the technical or moral chops of big law firms involved in the intellectual property game, but this seems reasonable to me.

The site was being used to do some bad thing to their clients, so they're justified in assuming the current owner might know something about it. Changing registrars and WHOIS info is exactly the kind of thing a shady site might do to throw off an investigation. If you're fortunate, you'll at least get your reply read by someone who can understand the plain English of "it wasn't me, I bought the site since then" and who will do a bit of research to cross-check it.


What this needs is a mitm service that gives shortened/custom urls like https://prom.os/paw_patrol_biscuits that redirect to the vendor's site, and when the promo/limited-run is over the url can be 'turned off' in a control panel and then default back to a 'This promotion has ended, but visit <vendors general site>' for more information on products' or such.

I guess it'd be hard to get companies to use such a service, except for situations that cause product issues like this one we're seeing.
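A minimal sketch of such a service, using only the Python standard library; the routes, slugs, and vendor URLs are hypothetical stand-ins for whatever the control panel would manage:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "control panel" state: path -> (target URL, promo still active?)
ROUTES = {
    "/paw_patrol_biscuits": ("https://vendor.example/promo/paw-patrol", True),
    "/summer_2019_promo": ("https://vendor.example/promo/summer-2019", False),
}
FALLBACK = "https://vendor.example/"  # "this promotion has ended" landing page

def resolve(path: str) -> str:
    """Where a short URL should redirect right now: active promos go to
    their target; ended or unknown ones fall back to the vendor's site."""
    target, active = ROUTES.get(path, (FALLBACK, False))
    return target if active else FALLBACK

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Temporary redirect, so clients never cache a dead promo target.
        self.send_response(302)
        self.send_header("Location", resolve(self.path))
        self.end_headers()

# To serve for real:
# HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```

The key property is that the printed QR code only ever encodes the short URL, so the destination stays editable long after the packaging ships.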


A lot of online QR code generators provide this service (often by default without making it clear that they are injecting their URL). It can definitely be useful to "change the URL" after deploying the code, but you still have the same problem that you don't control the domain. If you stop making payments or the company goes out of business then you are out of luck.

IDK if any of these services support custom domains, so that you could have qr.mycompany.example or whatever. That way, if something goes wrong with the service, you can at least point the domain somewhere else.

But I think in general you should control your URLs. Especially for printed material. Often this would be something like a short URL or some other small name that can be directed to the intended final site and changed at any point.


In other words, the biscuits need more shortening.


Excellent write-up. This is the kind of content I love to see here. Great to see how ChatGPT helped you along the way to learn and solve a problem you weren't very well versed in. I resonate with the comment that you were the developer and ChatGPT was the coder! Exactly how I've felt on some of my projects. Also true that simplicity is indeed complex.


Seems interesting, will definitely give it a try. A few observations/questions from reading the documentation:

> Continue will only be as helpful as the LLM you are using to power the edits and explanations

Are there any others apart from GPT-4 suitable for programming-copilot tasks?

> If files get too large, it can be difficult for Continue to fit them into the limited LLM context windows. Try to highlight the section of code that include the relevant context. It's rare that you need the entire file.

Most of the value and real-world benefit comes from use in brownfield development, where legacy code isn't well understood and is large (exceeding current LLM context windows?).

> telemetry through posthog

Can organisations set up their own telemetry and development-data collection to further analyse how and where the copilot is being used?

> FinOps

How does one get visibility of token/API usage and track API spend?


Appreciate the deep read into the docs!

> We've found claude-2 very capable, especially for chat functionality, and especially in situations where you're looking for the equivalent of a faster Google search, even smaller models will do. For inline edits, gpt4 well outperforms others, but we've only optimized the prompt for gpt4. There's a LOT of tinkering to be done here, and it seems clear that OSS models will be capable soon.

> Definitely value there. We have an embeddings search plugin heading out the door soon, but we very consciously avoided this for a while - it obstructs understanding of what code enters the context window, and we think transparency is underrated.

> Yes! You could have your own PostHog telemetry by simply switching out the key, but we also deposit higher quality development data on your machine (we never see it). Benefits being both 1) understanding ROI of the tool, and 2) being able to train custom models.

> This is a reasonable request! We'll add a feature for this. Right now, you can use the usage dashboard of whichever provider's key you use.


Long story short it’s still a mystery why it’s broken and what caused it.


Some say it is because of equipment failure, ground settlement, wind vibrations, and/or some combination of the three. But no one knows for sure.


I suspect that somebody knows for sure, they just aren't announcing it to the press.


I suspect the problem could be multifaceted and they are continuing to explore the situation, and of course to dodge responsibility. Public relations and communication are not a concept in this part of the world, which is why we won't be learning anything until the problem is completely gone.


>Are you all working on providing an easy way to maybe use LLMs for chatting/search without sending my data to OpenAI?

From a brief look at the GitHub repo, it seems you need to set up an OpenAI API key, so I'm not sure whether it can currently chat/search without sending data to, or needing access to, the OpenAI API?


Search does currently work 100% offline - none of your data would be sent to OpenAI if all you're doing is searching for your local documents. You could completely disable your internet connection and it would still work.

Chat currently is only integrated with OpenAI because it had the highest quality + lowest barrier to entry. We're experimenting with open source LLMs and hope to have an alternative available soon.


Previous discussion on the topic from yesterday:

Apple forced to make major cuts to Vision Pro headset production plans https://news.ycombinator.com/item?id=36573667


Comments moved thither. Thanks!


In fact there have been at least 7 submissions of this story in the last day:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Does anyone else think submitting the same topic a day late should result in a suspension?

I think submission quality would improve if people were forced to look up whether the topic has already been discussed.


HN already prevents submitting the same link within a short time frame. Quite often submissions end up without activity, and resubmitting helps more people contribute to the discussion.

Topics aren’t “finished” just because /some/ HNers discussed it one day.


You can't force people to do stuff like that.


Right, but couldn't that type of system be roughly accomplished by looking up the canonical URL and/or title from the meta tags (via a scrape) and searching against recent posts?

Having a stronger policy on repeat duplicate offenders should be a thing. Otherwise it just farms points that can later be used by a bot farm.
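The first step of such a system would be URL canonicalisation, so near-duplicate submissions compare equal. A rough sketch (the tracking-parameter list and URLs are hypothetical, and edge cases are exactly where this gets hard):

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

# Hypothetical set of query params that never change the destination page.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref", "fbclid"}

def canonicalize(url: str) -> str:
    """Normalize a submitted URL: lowercase scheme/host, drop the
    fragment, strip tracking params and any trailing slash."""
    p = urlparse(url)
    query = urlencode([(k, v) for k, v in parse_qsl(p.query)
                       if k not in TRACKING_PARAMS])
    return urlunparse((p.scheme.lower(), (p.hostname or "").lower(),
                       p.path.rstrip("/"), "", query, ""))

a = canonicalize("https://Example.com/story/?utm_source=hn#comments")
b = canonicalize("https://example.com/story")
print(a == b)  # True
```

Even this sketch ignores mirrors, AMP pages, mobile subdomains, and paywall-bypass rehosts, which is where "harder than it looks" really kicks in.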


It's much harder than it looks and almost impossible to get consistently correct.


[flagged]


Kind of interested (with no judgment implied), do you think Apple is paying people to make posts here (weirdly of stories talking about their failures) or are you using astroturfing to refer to Apple’s extremely successful work building public mindshare resulting in a fanbase that wants to constantly discuss their products (and a hate group that’s still obsessed with talking about them)?


"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."

https://news.ycombinator.com/newsguidelines.html

https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...

Edit: you did this in at least one other place recently - https://news.ycombinator.com/item?id=36544081. Please don't.


Journal apps are a start


Why not rely on a robots.txt entry instead of having to explicitly include an HTTP header to opt out of AI?


Reading the related GitHub issues, the dev seems to just not understand HTTP or web-crawling etiquette, even before you get to the "actually AI is good for creators" pitches. The damage is probably done: even if this gets fixed, unethical people building datasets will just use the old versions.


Because - according to the developer - respecting robots.txt is unethical.

His contention is that denying content to AI tools deprives people of their right to better AI tools...
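For reference, honouring robots.txt takes only a few lines with Python's standard library. A sketch that parses a robots.txt body directly (normally you'd fetch it from the site's `/robots.txt` first; the rules, user agent, and URLs here are made up):

```python
from urllib.robotparser import RobotFileParser

# Parse a hypothetical robots.txt body; a real crawler would fetch
# https://example.com/robots.txt and feed it in the same way.
rp = RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /private/
""".splitlines())

print(rp.can_fetch("MyCrawler/1.0", "https://example.com/public/page"))   # True
print(rp.can_fetch("MyCrawler/1.0", "https://example.com/private/page"))  # False
```

So the barrier to respecting it is a choice, not an engineering problem.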


It’s a straw man argument, which gives you a good idea inside the psyche of the dev.

If anything picks up a URL and uses it later, that is definitely a web crawler.


Seems pretty clear that it's meant to be malicious compliance with consent, with consent being automatically assumed unless you say no to this specific scraper, as though there were even a reasonable chance millions of sites could possibly know about the exact tag.


Probably because he knows doing so would make his life harder and give him less data to scrape.


I'd also be curious what headers he sends, like User-Agent.


that's what I was thinking too.


"Healing Back Pain: The Mind-Body Connection" by John Sarno (https://www.nytimes.com/2021/11/09/well/mind/john-sarno-chro...). This definitely helped me through my back pain.

