Do you get unlimited vacation, though, like Salesforce employees do? When I worked there, nobody worked those days anyway. Will this really have an impact on when people take time off?
PTO is usually at the discretion of managers; they're more inclined to decline requests for time off when too many people are already off, to avoid the situation where someone on the skeleton crew gets sick and not enough people are available to handle an emergency.
It really depends on your customers' needs; Slack going down for a day or two over the holidays is unfortunate but not the end of the world. Salesforce going down could, I imagine, easily cause massive financial losses.
I'd like to think it should be easy enough to put people on emergency call rather than forcing them to put 8 hours of butts in seats for no reason, but apparently that's not how Salesforce wants to roll.
If you need bums in seats, the professional way to do it is to have an on-call rota.
A rota is the positive affirmation that employee X will be here to support the team, versus the negative one that, well, person X can't be away, because they're the last one left who hasn't taken PTO.
Hahaha. Unlimited vacation actually means vacation only when your manager feels like it - which could be less than you'd get under a fixed vacation allotment. Many such cases.
The manager themself is likely going to take advantage of it too. Of course, there are enough Salesforce or former-Salesforce people in this thread stating how things actually are there (don't expect most people to be available most of December) that we don't need to speculate about how things could be or how abuse could still happen.
Some companies offer unlimited PTO. The catch is that the culture often discourages people from taking more than a few weeks off. It's been shown that if a company just offers N weeks, people take more time off than they do under unlimited plans, which seems ironic.
> Some companies offer unlimited PTO. The catch is
...that unlimited PTO can't be accrued, and is usually still subject to approval like regular limited PTO. That makes it an easy mechanism for arbitrary favoritism by managers, without the fallback that if your PTO isn't approved you at least keep and accrue your balance, which can be used later in your career or cashed out, and which the company then has an incentive to let you use to clear the liability from its balance sheet.
Also, there's no vacation accrual, so there's no payout when employees leave. Because of this, I always recommend that the folks on my team be proactive about taking vacations and odd days off for personal time, and keep track of how much vacation they've taken. If they don't use the benefit, it's their loss. If they take advantage of it, I believe it's better for both team and individual well-being.
My company has unlimited vacation. I take time off regularly, and the CTO explicitly calls me out on it in all-hands meetings as someone to emulate.
A coworker that typically holds the heavens up like Atlas felt like he's burning out, so he's about to take 3-6 months off to do yoga in Switzerland or something. Everyone's happy for him. It sucks for project schedules I'm sure, but it's better to take a break and come back fresh than burn out and leave. Another coworker just came back from a 3 month sabbatical, and her fresh mental state will be a great productivity boost.
I just spent a week on a family vacation, then another week volunteering at a solar car race, and then another few days visiting family. I was looking forward to actually getting some focused work done this week, but now I'm out sick and only have the mental capacity to read HN and sleep. I suppose I feel about as productive as I usually do, since I can still respond to people on Slack all day!
Yup, this is my experience too. The maximalist complaints about unlimited PTO being a scam seem to be a mix of:
a) sour grapes
b) people being unwilling to admit that they prefer paternalism to making their own decisions about how to balance work and life
(FWIW for me, making my own decision looks like "4-5 wks per year, not including a handful of ad-hoc 3-day weekends throughout the yr")
I think defined vacation plans:
1) give me a monetary benefit if my manager doesn't allow me to use my vacation
2) allow me to evaluate a job offer more concretely
3) allow me to negotiate confident that a change in manager won't wipe out my gains
Are there other benefits or forms of comp where it's preferable not to agree on amounts beforehand?
Not to imply you are, but it's certainly possible to over-analyze things. There are plenty of qualitative aspects to a workplace that will compound into financial gain. If you join a worthwhile project with a long term outlook and dedicated and supportive coworkers, then the expected value of any equity you earn will likely be higher than otherwise. My current company's leaders want everyone "doing their life's best work" and would genuinely feel bad if they were wasting everyone's time. That includes themselves, and with every year the company grows and gains more momentum.
To an extent I've been very blessed in my career, and I've always been able to accept new jobs based on my fit with the project and the team. I've also had success negotiating compensation to serve my interests. I've never seriously factored vacation days into any sort of comparison. I've never found myself in an environment where my manager had any expectation of prescribing when I should take time off. On the contrary, every manager I've ever had has been wholly supportive whenever I've taken time off. I think a large part of that comes down to my attitude and work ethic. I've certainly become spoiled, though. Unless my situation changes drastically, I wouldn't consider working somewhere with less flexibility than I currently enjoy. I would work for less compensation if it were the right project, though.
I find it completely plausible that some people would prefer limited PTO. My comment specifies the maximalist criticisms, which can't imagine unlimited PTO as anything but a scam.
I've seen this a million and one times since I started working in tech, including my nontech friends being blown away by the perks at Google and concluding that it must work me like a dog (I worked a hard 35 hrs/wk at the time).
FWIW, people confidently claiming that this is always and necessarily how unlimited PTO works are wrong (which should be obvious...). I worked at a company that was extremely intense and deadline-driven, and unlimited PTO was pretty great. In the almost 4 years I was there, I took an avg of 5 wks/yr, not counting the scattered 3-4 day weekends I took throughout the year. Hell, I think there was even a policy where managers would ping employees who took less than X days/yr (this happened to me during one of the lockdown years).
If you are consistently meeting or exceeding the expectations of an engineer at your level, you can do ridiculous things like take 60 days off a year. I know I did back when I was in a FAANG sorority^W eng team!
It also means if you are not meeting expectations, you’ll probably self-select to get zero time off.
Engineering pays well because you are supposed to work magic. If you can’t produce the magic then the dark clouds gather quickly. The elite sports team analogy is a good one. Keep up an unlucky run of bad games and you’ll get benched then sacked.
I don’t condone any of the above, but this is the mindset of “unlimited PTO”. More so than what I’m seeing others describe here.
The magic is when you build an abstraction, then another one on top of it, and then a third one on top of that. Now you have a system which is starting to look inscrutable because of its complexity — “magic” if you will humor that language of mine — and yet you, the author, know that it is just three logically separate systems stacked on top of each other using each other’s interfaces.
Log data from each request. Map reduce it into an SQL table. Render the statistics with a JS graphing frontend. Use a pub/sub to update everything in real time. It’s all just plumbing but the overall effect is of something much more.
To an outsider, that is the magic. Software engineers know better — it’s all very well taking a design and implementing the vision, but what is much harder is coming up with the design and the idea in the first place. Some days you’re on fire, implementing component after component without ever really having to rethink anything — you got the interface right the first time, or you understood the problem well enough to keep a component small without it creeping into multiple areas of concern.
Some days… weeks… it is far from clear what to do next. You know you need to migrate this performance-critical code that exists verbatim (copy and paste!) in two repositories, and instead turn it into a single first-class dependency that does the compute on a GPU. Some of the original infrastructure is written in Python, some in bash. It was written with no thought given to refactoring, and the whole project is already in use in production.
When you’ve got a mental model of what to do you can churn through this kind of work in a week of intense coding. When you have no idea where to start things can feel very hopeless.
If you’re just programming then you’re probably very smart and able to figure these things out without breaking a sweat, or you have a tech lead
who nudges you in the right direction with calm competence that belies the fact they wake up at 4am every other night to reach for a legal pad to sketch out yet another potentially doomed idea to reduce technical debt that literally never sees the light of day.
You're just in the valley; dig deeper and you'll find the awe again. Just the things required to make networking work at all are practically magic. The mere existence of error-correcting codes makes me feel that way, for example.
Effectively, it means you have to ask current employees how much vacation you get.
The theory is you can "take what you need" so long as you're getting your work done. In practice, company culture of course dictates what's considered reasonable, and as best I can tell it is highly variable between companies.
At my "unlimited" place, it seems like 3 weeks is considered reasonable, especially if split up. But I know someone at another company who has had trouble getting more than a week for a couple years running.
While they don’t specifically say what the “language server” is, if it’s the same protocol that powers VS Code[1], this is a big deal. It potentially means that language owners can build tooling that works across IDEs much more easily. Historically, JetBrains seems to have resisted the idea of a standard language protocol (which makes sense, as it comes from Microsoft). My guess is it’s becoming impractical to reimplement every language feature for TypeScript, C#, Go, etc. Embracing the standard LSP will mean less time spent on low-level features and more time building JetBrains-only value-add.
At least with TypeScript, they've been using the official language server for a while. Sadly they still fall back onto their own JS-esque 'guess what this could be' mode in some situations where vanilla TS would have just emitted an error that the respective module could not be found.
For C# Rider supports Roslyn analyzers and code fixers as well, though I don't know with what performance impact (as ReSharper isn't based on Roslyn this results in all analysis work being done twice, which can be noticeable).
I don't understand how people believe LSP is perfect. It is not: the API is broken, it's missing a ton of stuff, and as a result client implementations are often broken.
Now that we have the LSP running as an external process, I hope they jettison Java and instead use C/C++/Skia or something for the editor so it's fast.
I think if somebody gets creative about it some of that cost can be reduced. Likely the big drivers in cost are going to be running utilities to support different uses (kitchens, showers, etc.) Maybe a new type of living space can exist where kitchens are shared? Or where there is a gym with private spa-like shower/changing space on a lower floor. I'm not saying these are good ideas (I certainly wouldn't live in that sort of place now), but maybe somebody can figure it out.
This isn’t unethical. There are entire companies that do exactly this. If done well, this is actually an extremely valuable service. It would be unethical if you are hired as an employee and outsourced your work, but as a contractor this is fair game.
What? A contractor usually will sign a contract and intellectual property agreements to their name, it's obviously not their right to share company information with a third party.
Subcontracting is not against the rules unless the contract specifically says so. Your job as the prime contractor is to manage the subcontractors so that means IP and privacy concerns etc... Houses are built by subs for example. Defense software is built by subs.
This is still an issue even if you're a contractor! The company should know who has access to their internal resources and codebase. If you are upfront about it, then of course it is fine. Make sure it's in the contract.
The vast majority of docs.microsoft.com is written in Markdown. This project seems to be both very easy to contribute to and produces a great docs site.
MSDN has been around much longer than Markdown has even existed, so I highly doubt that. Maybe the more recent stuff, which by the way is much, much worse from a technical POV than their older documentation, sadly. Compare the mess that is the .NET Core / ASP.NET MVC documentation to the mostly excellent WINAPI documentation...
Having said that, https://docs.microsoft.com's flavor of Markdown does allow for embedded HTML, like GitHub's, and enough pages still use that feature that the conversion to Markdown is arguably incomplete. It isn't a big issue in practice, however; you can update markup from HTML to Markdown along with your other changes.
As a recent-past engineer on the Windows team, I have a rather lower opinion of the Windows API docs than you do. :) The Windows developer platform has not had enough dedicated technical writers for years; our developer content teams are mainly editors of engineer- and PM-written original docs, which can lead to API doc sets with badly written pages, important missing information, or references to Windows-internal developer tools. I tried to channel my frustrations into correcting and extending my coworkers' writings, or into gently asking them to fix their omissions when I didn't have the free time to spend on the needed research.
Longer term, this will also likely accelerate servers running on ARM. Writing software on ARM laptops that is deployed to production servers running x86 will start to cause a host of new challenges. The switch to running ARM in production will have many advantages for developers and will likely be very attractive to cloud providers (AWS, Azure), as the electricity costs for these servers may be significantly lower.
If there was an actual energy efficiency advantage -- i.e. less power consumed for the same amount of work -- Google would already be 100% on ARM. Why do you think they would leave that on the table?
I realize the situation changes every time a new CPU comes out, but I have never personally seen a real workload where ARM won on energy efficiency while having reasonable performance. Tests like [1] and [2] showing x86 having an order-of-magnitude lead on database performance vs. AWS Graviton2 should give you serious pause.
Those tests are pathological cases because of bugs, and aren't representative of the hardware's performance. (For that one, I suspect ARMv8.1-A atomics weren't enabled in the compiler options...)
Well, the listed GCC options do not specify microarchitecture for either ARM or x86. So it's probably a k8-compatible binary on the top line, too. I'm also not sure why atomics would be important in a single-client mysql benchmark. Either way, the risk that your toolchain doesn't automatically do the right thing is one of the costs that keeps people from jumping architectures.
I presume this has to do with the database software's level of ARM support.
PostgreSQL appears to have had ARM support for some time. MySQL only added it with 8.0. As far as the other database options are concerned, ARM isn't even supported yet.
>If there was an actual energy efficiency advantage -- i.e. less power consumed for the same amount of work -- Google would already be 100% on ARM.
Energy-efficiency wins on server workloads are a very recent thing, to the point that it basically starts with the Neoverse N1 / AWS Graviton2. And even then, not for all workloads.
Not all software is optimised for ARM, compared to decades of optimisation on x86. Not to mention compiler options, and that the EPYC was running on bare metal with NVMe SSDs while the Graviton2 was running on Amazon EBS.
And most importantly, you would be paying for 64 threads on Amazon at the price of 64 Graviton2 cores; i.e., it should be tested against a 32-core EPYC 2 with SMT.
I still doubt the Graviton2 would win in a fair test, but it would be close, and it would be cheaper. And that is the point.
But up until recently, Intel had a manufacturing-process advantage that made it difficult or impossible for the likes of Google to source competitive high-performance cores. That advantage has slipped away.
EPYC is an innovative product and comparing it with Graviton clearly shows that AMD has done a great job and that Graviton is not quite fully competitive yet (but not to the extent that those benchmarks seem to indicate, as others have commented).
I think it's possible to overstate the energy-efficiency gains from using ARM, but all the indications are that ARM cores with fully competitive performance will emerge and that they will have some efficiency advantage. After all, why would AWS be investing in ARM if not?
That's pretty silly. What makes you say this? There are large groups at Google responsible for buying every computer there is and evaluating the TCO thereof. With operating expenses exceeding two billion dollars per week, they have a larger incentive to optimize their first-party power efficiency than anyone else in the business. I'm fairly certain their first-party workloads (search etc) are the largest workloads in the world.
> With operating expenses exceeding two billion dollars per week
"Alphabet operating expenses for the twelve months ending March 31, 2020 were $131.077B, a 13.47% increase year-over-year."
Operating expenses include everything: salaries, etc. Google isn't spending $2B/wk on power and server purchases, though those costs are still huge, no doubt.
Neither Amazon nor Apple are buying off the shelf ARM chips. They are both designing processors to meet their needs. Amazon bought Annapurna labs and Apple bought a slew of companies specializing in processor design.
I don't think they would switch even if they could in the near future.
The poster above says that Amazon "leapfrogs." The question is "leapfrogs where?" The fact that ARM cores cost 100+ times less than Intel's, and are n times more power efficient, has been well known for an eternity.
What people don't get is that you get the whole platform on x86, and ARM is a very, very DIY thing, even if you are a multinational with lots of money on RnD.
> I don't think they would switch even if they could in the near future.
They've already brandished their ability to port their whole product from x86 to POWER[1], and deploy POWER at scale if they need to[2]. My personal interpretation of these announcements is they are made with the purpose of keeping their Intel sales representatives in order, but the fact that you don't also see them or anyone else brandishing their AArch64 port should tell you something.
I'd say, less bluntly, that Google is not as innovative as it once was. Old large companies ossify, and Google is no exception. It failed on the social network (Facebook), it failed on instant messaging (WhatsApp), it failed on the picture meme (Snapchat), it failed on the video meme (TikTok), it failed on videoconferencing (Zoom)... you may see some kind of pattern there.
If asked whether Google will succeed at something new (say, Fuchsia), given those priors, my response will be: "No. It would be a surprising first in many, many years. The company is in decline."
What we're missing is the connection between the services of the large companies: Google, Amazon, Microsoft all have an offering made of devices (hardware), websites (software) and cloud services. There seems to be a synergy, where you benefit from doing all 3 things in-house to reduce costs on your core product or to capture consumer minds. Microsoft is getting back in phones, with an Android offering. Amazon is not giving up on Kindle.
Notice how Apple is missing on the cloud services part here. They have some internally (for Siri) but they do not sell them.
Even if they don't start a cloud offering, they may sell their CPUs to others who will, before eventually rolling their own hardware.
This will give time to people who adapt existing server software to work better on Apple ARM CPUs (recompiling is the tip of the iceberg; think about the differing architecture, what can be accelerated, etc.).
We are seeing SIMD/AVX optimization for database-like computation only now. It may take a while.
Apple is not missing out because it doesn’t jump on every bandwagon that is not part of its core competency. It’s still the most profitable out of all the tech companies.
YouTube requires a lot of server capacity compared to TikTok (under 10 minutes means no ad money, so people make videos at least 10 minutes long!), and Zoom requires almost no storage while it can sell corporate subscriptions.
The only reason YouTube still enjoys some success now is that it wasn't made in-house, and the acquisition wasn't too badly managed. GrandCentral (parts of which still live on as Google Voice) was a different story.
But it only shows that the last success Google made in-house was a long, long time ago. The Alphabet rebranding changed nothing. Since YouTube, Google has turned into another Yahoo for startups: a place they go to shrivel and die.
Ultimately moving data is the prime power consumer in a data center, so unless you are somehow drastically reducing the amount of data movement, you are not going to get drastic energy savings. This remains true even inside systems and CPUs, fast and wide buses and caches require lots of power, the main power cost in wide SIMD (AVX-512 vs. AVX2) isn't the computation itself, it's getting the data to and from the ALUs.
> Writing software on ARM laptops that are deployed to production servers running on x86 servers will start to cause a host of new challenges.
We already know what this is like, and the challenges are usually in the other direction (write/test on x86, attempt to deploy on ARM).
This is due to things like word alignment and memory ordering requirements. For the most part, the "extra complexity" in x86 allows you to ignore a lot of the stuff you need to pay more attention to on ARM.
Millions of iOS developers develop on x86 and deploy to ARM devices every day. The iOS simulator compiles apps to x86 and they run on top of an x86 version of the iOS frameworks.
The "x86 version of the iOS frameworks" is where those differences would be apparent. Apple may have done a good job with their simulator, but that's just it -- Apple did this work so that their iOS developers wouldn't have to.
As an iOS developer, you don't need to think about memory barriers if you don't want to, but Apple's simulator and frameworks absolutely do. That's the kind of work that awaits systems programmers who want to port high performance x86 server software to ARM.
x86 hardware is physically more permissive than ARM. Accesses that must be aligned on ARM can be left unaligned on x86, and they'll still work correctly. But x86 isn't going to explode if you do happen to align them. ARM is weakly ordered and x86 is strongly ordered, so you generally need more fences on ARM than on x86, but leaving those fences in place doesn't break things on x86.
My point was that code that works correctly on ARM usually works correctly without modification on x86. The opposite is generally not the case, even if the iOS simulator hides this for you.
People use Java and Go to write server software. They'll make things aligned properly, so it won't be a problem.
The real problem is the memory concurrency model. You will hit subtle concurrency bugs on ARM that never manifested on x86. And those bugs will be present in any language with threads and shared data. They are hard to debug and frustrating, because they lead to rare deadlocks and resource leaks.
Now the question is, who's going to make those chips?
While the big cloud providers are certainly able to internalize design of their hardware, you still need to ship millions of servers to smaller players.
Ampere Altra and Marvell ThunderX are targeting this market. Qualcomm, Broadcom, and Nvidia tried earlier and gave up, but I wonder if we will see them enter back in.
Are you using the term "blind trust" in a non-standard way? My understanding is that it refers to arrangements where the beneficiary doesn't get to know about (or choose) assets held, not where the beneficiary gets to be anonymous.
That Reddit post is funny to read from a 2020 perspective. Things aren't so stable now.
If I won a $600MM prize, a good chunk of the $30MM spending money would go to buying a large property in rural southern Utah with good water rights, quality arable land that can grow fruits/vegetables/hay, an airstrip, and a defensible position. I would stock it with lots of food, guns, ammunition, and diesel fuel.
Having that retreat is as important to me as having the $30MM in treasury notes.