> Imagine the difference between "I want to give you feedback that you aren't spending enough time with new hires" vs "I know you've been wanting to spend more time with the new hires, why don't you take them for lunch and send me to your status meeting over Tuesday lunch time this week."
This is the proper answer. Ultimately, feedback should be about changing something. My experience is that most people are good at neither giving nor receiving feedback, and that includes me. There are more effective ways to change things.
OP's approach is useful when you have to give feedback, which is expected in most large companies in some form or another (evals, etc.).
One of the surest ways to get your manager on your side is to help them meet their goals and take some things off their plate. Like other people said, most semi-competent managers are aware of the issues happening in their team. If you come up w/ a proposal to solve those issues, it will improve the team much more effectively than some feedback.
It also depends on your goals, but fixing issues your manager is facing is one of the most reliable paths to promotion in companies up to mid size, unless your manager is an a*hole.
Japan's suicide rate has decreased a lot and is now close to the world average; it is actually lower than in the US and several European countries.
I agree, however, that this smells of orientalism. I don't speak Japanese anywhere near fluently, but having lived there for 15 years, the only place I've seen or heard of the ikigai concept is in the "books for foreigners" section.
For large companies, a big reason is to transform capex into opex, plus the predictability. Moreover, large organizations tend to favor predictability over absolute levels, i.e. they are OK with increasing the average cost if the variance decreases.
This. The beginning of my career in cloud was a POC, where the director shared with me that this (capex to opex) was a major driving factor, along with some of the fringe benefits.
I got to see close up that a team of devs ran their whole solution (with a bunch of paying customers and everything) in the cloud, because cloud automation was good enough that they didn't need dedicated ops people.
Now I work for a cloud provider. I can't say that if I were running a business I'd build it cloud-first instead of on-prem. Certain use cases, sure. If I didn't need a lot of horsepower, I might build it on a cluster of VMs with some segmentation of duties - not quite microservices, not quite a monolith. Most likely, if I were hosting in the cloud, I'd use the provider I work for, just because I know the system, how to get things done, and how to talk to support.
I will say though - learning the ins and outs of cloud computing has made for a great career. Challenging, but lucrative.
FTA:
> Microsoft and Google decided not to officially comment on the survey's findings. However, a representative for one of the hyperscalers retorted that the figures seemed cherry-picked and pointed out that, as an example, customers using reserved instances could realize significant savings.
Reserved instances are a thing for sure. There's lots of other ways you can control cloud spend (enterprise agreements, dev/test subscriptions, spot instances, automated shut down / scale down, etc.) - it's enough complexity by itself that big companies hire entire teams of people to just work on tracking, projecting and controlling cloud costs.
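To make the reserved-instance point concrete, here's a back-of-the-envelope sketch with made-up prices (real rates vary by provider, region, instance type, and commitment term):

```python
# Rough sketch of why reserved instances matter. All prices are hypothetical.
ON_DEMAND_HOURLY = 0.10   # assumed on-demand $/hour
RESERVED_HOURLY = 0.06    # assumed effective $/hour with a 1-year commitment
HOURS_PER_YEAR = 24 * 365

def yearly_cost(hourly_rate, utilization=1.0):
    """Cost of one instance for a year at the given utilization fraction."""
    return hourly_rate * HOURS_PER_YEAR * utilization

on_demand = yearly_cost(ON_DEMAND_HOURLY)
reserved = yearly_cost(RESERVED_HOURLY)  # paid whether you use it or not
savings = 1 - reserved / on_demand

print(f"on-demand ${on_demand:.0f}/yr, reserved ${reserved:.0f}/yr, "
      f"savings {savings:.0%}")
```

Note the catch that makes cost teams necessary: the reserved rate is committed spend, so it only wins if the instance actually stays busy - at these assumed prices, below 60% utilization on-demand would be cheaper.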
It is true both countries were already industrious, but it was far from a given they could go back to their former self. They were both utterly destroyed, and things could have gone really badly, especially in Japan.
I can't find the reference right now, but I remember reading in "Embracing Defeat" by John Dower that average adult calorie intake dropped to ~1200 kcal in 1947/1948. That period has a huge influence on Japan to this day, including on the architecture of Tokyo through the black markets.
Both Japan and Germany had a strong military-government culture, and became reliably democratic by the end of the Allied occupation.
Sure, things could have gone wrong. They did, for example, for Ukraine or most of Eastern Europe. But my point is, had these countries not been previously industrialized, they probably wouldn't have fared as well as they have.
I guess this was a joke? But if not, fyi, ramen is from China. Sure, the Japanese have made it their own, but even the Japanese don't list it in restaurant guides as Japanese food. There's a section for "Japanese food" (和食) and a separate section for "Chinese food and ramen" (中華料理とラーメン), or it's in a separate category. It's still often called "Chinese soba" (中華蕎麦).
It’s funny that ramen is literally lamian (拉面), yet the Chinese distinguish between Chinese lamian and Japanese ramen in their own country. Japanese ramen is considered foreign food there, even though it's easy to find the Chinese dish it descends from.
It's open source for sure. But yeah, there's no DIY required (which is why ZSA and their keyboards are so popular - they're the Apple of the DIY-programmable-ergonomic-keyboard world, providing excellent build quality and a great experience, for a price).
If you have access to the query log (i.e., who makes which query in what context), you can see which queries are "close" to others in context.
For example, with sessions, you can detect manual query rewriting and use this as a signal for which queries are close to each other in the time context. You can do various fancy things from just that.
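A toy sketch of mining those rewrites from a session log (the log schema and field names here are made up for illustration; real logs carry far more context):

```python
# Mine candidate query rewrites: consecutive, quickly-issued queries in the
# same session are likely the user manually correcting their own query.
from collections import Counter

# (session_id, timestamp_seconds, query) -- hypothetical log format
log = [
    ("s1", 0, "pyhton tutorial"),
    ("s1", 8, "python tutorial"),    # fast follow-up: likely a typo fix
    ("s2", 0, "cheap flights nyc"),
    ("s2", 300, "hotel nyc"),        # too far apart: new intent, not a rewrite
    ("s3", 0, "pyhton tutorial"),
    ("s3", 5, "python tutorial"),
]

REWRITE_WINDOW = 30  # seconds; assumed threshold for "quick" follow-ups

def rewrite_pairs(log):
    """Count (old_query, new_query) pairs issued quickly within one session."""
    by_session = {}
    for sid, ts, q in sorted(log):
        by_session.setdefault(sid, []).append((ts, q))
    pairs = Counter()
    for events in by_session.values():
        for (t1, q1), (t2, q2) in zip(events, events[1:]):
            if t2 - t1 <= REWRITE_WINDOW and q1 != q2:
                pairs[(q1, q2)] += 1
    return pairs

pairs = rewrite_pairs(log)
print(pairs.most_common(1))
```

Pairs that recur across many sessions ("pyhton tutorial" -> "python tutorial" here) become candidates for spell correction or synonym mining.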
Nowadays, a simple way to start would be to use SOTA LLMs to generate synonyms offline, and use them for query expansion at query time. At least in a context where queries are small, that should give decent results. This has diminishing returns, however, because of cost (the more synonyms, the more expensive querying the index becomes), and you also lose precision for diminishing gains in recall.
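A minimal sketch of that query-time expansion. In practice the synonym table would be generated offline by an LLM; here it's hard-coded, and the cap on expansions reflects the cost trade-off mentioned above:

```python
# Query expansion from a precomputed synonym table (contents hypothetical).
SYNONYMS = {
    "laptop": ["notebook"],
    "cheap": ["affordable", "budget"],
}

MAX_EXPANSIONS = 2  # cap: every extra synonym widens (and slows) the query

def expand(query):
    """Turn each term into an OR-group of the term plus a few synonyms."""
    groups = []
    for term in query.lower().split():
        alts = [term] + SYNONYMS.get(term, [])[:MAX_EXPANSIONS]
        groups.append("(" + " OR ".join(alts) + ")" if len(alts) > 1 else term)
    return " AND ".join(groups)

print(expand("cheap laptop"))
# (cheap OR affordable OR budget) AND (laptop OR notebook)
```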
Ofc, for complex search engines like Google's, I am sure it is much more complicated.
Re: LLMs, I was trying to better understand how pre-LLM search worked, hence the interest in the topic.
Any chance you have any open source links that discuss how you practically operate a system based on the concept you describe (manual query rewrite w/i a session as your data set)? Perhaps it's obvious to an NLP person how to reduce that "idea" to practice, but it is not to me!
You're definitely right about the idea though - a former Search engineer obliquely mentioned that this sort of session based manual query rewriting was very core to how the synonym system worked.
It is hard to find modern references on this. When I led a search group, coming from an ML but non-search background, I found the following most useful:
What is your goal? 1) Know more about how they work on the academic side? 2) Be able to work at a company that works on LLMs? 3) Be able to work with LLMs? 4) Agents? Each of those goals may require a different learning "stream".
The book "NLP with transformers", or fast.ai is good for 3). For 1), assuming you do know how they work, I recommend you start reading papers.
I find the discussions around "prompt engineering" rather pointless, and they quickly become obsolete anyway (newer, more powerful LLMs make this more and more obvious).
I'm in the exact same situation as you rn. Freshman year of college and really interested in LLMs. Did you decide how you're gonna go about it? Maybe we could share resources
I'm focused on finals at the moment but I'd be more than happy to share what I've been looking at so far (I plan to look into this a lot more in the summer).
When it comes to the theory, 3blue1brown released some really nice videos that solidified a lot of my current understanding, especially of the attention mechanism. I think I am also going to do a mixture of reading papers/watching YouTube videos on things that are interesting (e.g. QLoRA for fine-tuning, or diffusion models for image generation) and trying to build a simple implementation myself to see where that takes me. Maybe I'll start with the Karpathy nanoGPT videos but try to do it with a different dataset.
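The attention mechanism those videos cover fits in a few lines. A toy sketch in plain Python (no framework, single head, no masking or batching) of scaled dot-product attention, softmax(QK^T / sqrt(d_k))V:

```python
# Toy scaled dot-product attention with plain Python lists.
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single head, one query at a time."""
    d_k = len(Q[0])
    out = []
    for q in Q:
        # similarity of this query to each key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        w = softmax(scores)  # attention weights over the keys; they sum to 1
        # output = weighted average of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]               # 2 query positions, d_k = 2
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 key positions
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # 3 value vectors
out = attention(Q, K, V)
print(len(out), len(out[0]))  # 2 output positions, 2 dims each
```

Since the weights are a softmax, each output row is a convex combination of the value vectors - that's the "weighted lookup" intuition the videos build.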
But for people like myself who lack the math background, data, and compute to be able to train very strong LLMs, I think it is also a good idea to try and build some projects/apps that use a fine-tuned LLM, or just call the OpenAI API.
I'm still a bit lost myself, but in 2 weeks when I'm done with exams, I'm more than happy to keep exchanging resources with you.
Perfect. I have my own finals in a couple weeks haha, planning to look more into this and possibly build a project over the summer. 3B1B's videos sound like a good resource, and the nano-GPT videos look really useful too. I'll share what I come across as well!