Hacker News | smiths1999's comments

I always thought "serverless" referred to AWS Lambda style services. Recently I saw it used to refer to cloud computing. Is this a common interpretation? It really took me by surprise.


I've always seen it to mean cases where the code owner is not responsible for designating or managing individual servers, but instead they just deploy their code and the cloud provider is responsible for scaling based on incoming load.

In that model things like AWS Lambda and Google Cloud Functions are considered serverless, but so are Elastic Beanstalk, App Engine and Cloud Run.

Managing your own individual servers in something like EC2 or Cloud Compute is definitely something I would not consider serverless, and I think most people would agree with that.


I teach at the university level and this is what I have done with my exams. Everything is open book and in my experience there is no difference in the average exam score between open book online and closed (or open) book in person. It is also way easier for me to not have to worry about who is cheating and where everyone is looking (I also don't like the idea of forcing students to turn on their webcams).


I finished up the coursework for my masters degree at the start of the pandemic. My university was quite flexible for how instructors would examine us, given how sudden everything had to change.

One of my courses, which only had about 8 students and two instructors, decided to do an oral examination, which ended up being basically a very in-depth, one on one conversation about the course material and based on the expectations set in the syllabus (so, no surprises).

While obviously not practical for large rosters, this was by far the best exam format that I have ever done in my many, many years of schooling. I'm sure not everyone would prefer it, but the students unanimously agreed to try it (wouldn't have done it that way otherwise), and it was just so great. It was not at all like an oral thesis defense, which was what I was a little worried about.


When I was an engineering student at the university of Naples, all my courses were examined both with a written exam and an oral one. No exception, no matter how many students. It was hard for us and the teachers but, boy, you had to really study that stuff! Since then I've become an academic myself and have been teaching in several countries. I have never found the same level of rigor in any place I've been.


I remember friends in the '80s studying law in Turin always having orals - as you say "they had to know their stuff"!


Italian universities get many things wrong, but in terms of quality of graduates I reckon they used to be really good. The downside was having a high number of drop-outs (of which I am one). They've gone through umpteen "reforms" in the last few decades so I don't know how they do these days.


This is how exams are in Russian universities. You walk in. The table has a number of small paper cards on it face down with topics the course covered. You pick one at random, flip it over, and that is the question you need to answer. Since you do not know up front which you'll pick, you need to know all the material. Since you only need to answer one question, professor time is saved and exam throughput can be quite high.

Professors are also given quite a lot of flexibility in their grading. My mother had a fun story about a professor she had in college - a professor of a really hard math class who wanted to save on exam time. He announced "exam will be hard. Anyone willing to settle for a D, bring your report cards forward, I will mark them D and you can leave. No exam.". Some people came forward, got their Ds marked, and left. Once the door closed, he said "Anyone willing to accept a C, please come forward". Some did. After the door closed there, he announced to the remaining smiling students expecting easy As/Bs: "I'll see you all for the exam tomorrow 8 am".

No way this could happen in the USA.


> My mother had a fun story about a professor

I heard that story many decades ago in the form of a joke. It may have started from a professor who genuinely didn't care about failing students but did care about identifying the best.


> decided to do an oral examination,

This really is the best case, but as you note it was 8 students so quite manageable.

It requires a little skill on the part of the examiner, but you can quickly find out how much material the student knows with much higher accuracy than other exam formats, in my opinion.

One of the skills needed is to be able to make it conversational-feeling and reduce the anxiety of students. You can often tell when a student mostly knows what is going on but has misstated or misremembered something, and guide them around the place they got stuck.


Orals have a lot of advantages, but they also make it very easy for unconscious bias to come into play, in that all the criteria for grading are soft.


Good point, this is also one of the aspects of skill. There are techniques you can use effectively to mitigate this.

One unfortunate thing is that poorly done, orals can be very uneven.


There already is unconscious bias. You can see the student's name, their penmanship, their writing style, you likely know who they are, etc. An oral exam just shifts the bias to accent, inflection, enunciation, skin tone, dress, etc.


When I was in school, 95% of grading was blind and nothing outside of exams was handwritten.

And while it was possible to de-anonymise, the academics were all in support of blind grading so why would they?

The only exceptions were projects where everyone was assigned a different topic, and graded presentations.

Of course, this is simpler in STEM subjects - it's not like you can guess someone's race or gender from their switching power supply design. Subjects that prize in-class participation and lengthy essays would probably be harder to blind-grade effectively.


This also depends quite a bit on class size. If you are one of 9 profs and 36 TA's on a whole year of a 1st year intro course, you can get together and batch mark finals very effectively blind.

If you are teaching a 4th yr/masters mixed class of 11 by yourself, you pretty much get to know who is who whether you want to or not. I suppose avoiding handwriting can help if it's appropriate (e.g., won't work in a math course), but I suspect you'll know everyone's style by then anyway.


The examiner doesn't even have to use the oral exam to give a grade. They could use that part simply to figure out whether the student passes or fails. It's very difficult to cheat in an oral exam. Combine the oral exam with the written exam and you could get an overview of what the student knows.


This privileges confident speakers. (The same way written tests privilege confident readers, and standardized written tests privilege those who have the time and resources to study the standard.)


I'm not sure about that.

It certainly could, but it also might not. I imagine being confidently incorrect is likely to produce a worse result than being unconfidently(?) incorrect, for example.

Similarly, a less confident speaker may end up spending more effort justifying their answers, which could better expose their knowledge.

I think it would depend quite a bit on the examiner in this case. Some people may even be simply biased against particularly confident speakers, particularly considering the relative positions of the speaker and examiner.


It doesn't if the examiner understands the material. You can't bullshit someone who knows much more about a topic than you do - bullshitting with confidence will only make you sound like a fool.

If the examiner isn't much more knowledgeable about a topic than their students, then something else has gone wrong.


But you can appear to know less than you actually do through a lack of confidence.


Right, but that's why it takes a bit of skill on the part of the examiner - you need to be able to support people through their nerves and lack of confidence.


Not much if you do it well. Confident and wrong won’t get you far, either ...


The real world also privileges confident speakers.


Schools should be teaching kids to be confident speakers and readers from an early age.


If teaching worked 100% we wouldn't need exams.


> and it was just so great.

I'm delighted to hear that it went so well, and I am a believer in the idea. I have seen, from time to time, oral thesis defenses become rather tense and difficult, and think that things go better in proportion to the preparation of both student and examiner. Any general observations about what worked, for those contemplating giving exams in this fashion?


Probably the most important thing that made it a good process was that, although I was certainly under pressure to perform and show that I knew and could explain the material, the format was as a thoughtful, two-way, deeply-engaging conversation rather than a grilling, one-sided examination.

The examiner (one of my two instructors) had a list of questions/topics that we had to get through, but the specifics and flow were natural and spontaneous rather than artificial and forced.

What made the conversation good was that the examiner discussed points that I raised, raised points that I didn't, asked my opinion on e.g. real world implications of theories or conclusions that could be drawn if this or that were true, et cetera. This made it two-way and engaging. While he did not give me any answers, of course, when it was clear that I could bring up and sufficiently discuss a topic, then he would go into details, which would trigger even more detailed responses from me, and so forth. In this way I think he was able to probe the depths of my understanding while not needing to employ a one-sided question/answer format.

I think that it is difficult to bullshit a topic in depth with someone else who knows what they are talking about, so an oral exam probably does not need to be a hardcore opposition like a thesis defense might be.

The allotted time was maybe 30 or 45 minutes (I don't quite recall), after which the examiner would tell you what grade they would recommend. If it was a grade with distinction, then of course you would say thanks and be done. If it was less than distinction then you could request another 10 or 15 minutes of further examination to try to show your mastery. (I passed with distinction, so I didn't go through that part, but I assume it would have been a continuation? But maybe it would get a little more intense if you were trying to improve your performance at the last minute? I don't know.)


My experience with oral exams has been the opposite. The profs ask useless questions on details that do not matter at all and are never covered in a real exam. They are also very biased in how they assess you based on their preconceptions. And it doesn't give you the time and privacy needed to solve real technical questions. (Most of my written exams were only 4-5 questions in 3 hours, which is obviously not happening in an oral exam.)


In my alma mater, for most courses we had to actually discuss what we wrote (and maybe some other topics) with the lector as part of the exam. Some were open book, some were closed. Regardless of that, however, when you get to talk to the professor it's very hard to fake knowledge and especially logical thinking. It sure takes more time (we usually had one on one for about 20-30 minutes per student), but you can size a person up way better than by just reviewing what he wrote/selected. And there is no obstacle (other than time constraints, which can be worked around) to continue doing this even during the pandemic.

I think we are overestimating technology such as AI and its applications. Some things are just plain better when done by a person. I had a very positive experience with one professor who chose to round my grade up instead of down, because my errors were technical (carelessness mostly), not logical, so he then asked me questions from subsequent chapters to see my reasoning. I really didn't care that much about the grade, but that discussion and the overall experience of failures that could lead to positive change through introspective analysis is something that you can't get from an AI or any pipeline processing.


Thank you! I'm sure it takes more work to make an open book exam, but it's definitely to the benefit of your students.


If you can't see them, doesn't that open up the possibility of students taking the exam together?


I give regular online exams without any proctoring software, pure honor system. I have three students who live together who consistently turn in identical work. What I don't understand is why they keep doing it, when I keep giving them 0s for copying.


How did you find out they live together?


Not every university / university class is big. Most likely explanation is that they simply said they live together.


Ah, that'd make sense. I would just think that they wouldn't mention it if they planned on cheating lol.


Back when I worked first line at a University, I had access to the student records system (so that I could verify students if they required a password reset).

I was able to see current and past residence data, which I suppose could have helped me verify the student's identity ("what post code[0] do you currently live at?"), but I tried not to look at that information when possible.

Lecturers also had access to this system (it dealt with some submission data), so I guess they may have had access to view residency data?

This is just an educated guess based on my experience, though.

[0]: For American readers, think ZIP code


Uncontrollable stupidity, apparently. Where I went to school you would have been called to the Dean on the first event and perhaps put on academic probation if they didn't like your answers. A second time would have meant academic suspension.

Crazy stuff.


Presumably they have some kind of recourse as to their grades.


I'm sure many will come down hard on my comment and disagree. But speaking as someone who teaches at a university and also works in industry and is involved in hiring, I don't think becoming an expert in git is worth your time. At this stage in your career you should spend your time mastering algorithms, data structures, and a compiled language like Java or C++. I would put emphasis on learning how to use your language of choice idiomatically (e.g., iterators, streams, the standard library and its core data structures, etc.). In my experience, the best way to do this is to practice Leetcode every day. Doing one question a day (a 30-60 min commitment) will put you leagues ahead of your peers. Combine this with reading a major book on your language (e.g., Effective Java or Modern C++ Design).

Without getting sidetracked by the merits of the technical interview, it is currently a fact of life. In my experience most undergrads struggle to solve even the most basic problems, and even if they come up with a solution, they are unable to code it in any language of their choice. If you are coming out of university as a "git expert" but can't code up a basic solution, you will get passed over every time.

Most teams (at least in my experience) are not struggling to solve git problems (although they certainly pop up). So while you could add some "value" there, overall you aren't adding a whole lot of value. On the other hand, if you know your language and are a moderately competent coder, you can add a lot more value.


I'm pretty sure (well hoping) that all of the advice to go deep into one thing would always be in addition to actually being able to code. I completely agree with you that understanding git very deeply is not needed or useful to the level of detail suggested here in other comments.

The level of understanding of git you need to stand out is very very limited in my experience. Most devs in "regular" companies struggle to understand even the basic rebase. Even on a logical, abstracted level. Never mind how and why it actually works as well as it does.

The types of predicaments I see people get themselves into even with git vs. say subversion are mind boggling. I have never gotten myself into a situation that wasn't resolvable by simply making sure that everything I try is done after committing my changes. You can just always go back and retry. And I just slap a label (a branch, in git speak, sure) on a commit before force pushing after that rebase with lots of conflicts, so that I have a backup. And even that is not strictly necessary. I've fucked up conflict resolution only to notice when the build server told me, and I had to go find a commit hash in my terminal output somewhere to resurrect it (I guess I was lucky I didn't hit an auto cleanup of dangling commits in between ;))
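The backup-label habit described above can be sketched in a throwaway repo (all branch and file names here are illustrative, not from the original comment):

```shell
set -e
# Throwaway repo: save a backup ref, botch a history rewrite, recover.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo v1 > file.txt && git add file.txt && git commit -qm "v1"
git branch backup/pre-rewrite    # cheap pointer: just a ref, no copying
echo broken > file.txt && git commit -qam "botched conflict resolution"
git reset -q --hard backup/pre-rewrite   # jump back to the saved state
cat file.txt                     # back to "v1"
```

Even without the backup branch, `git reflog` still records the old tip, which is how the dangling-commit rescue mentioned above works.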


It wasn't clear to me that the other comments were starting from "know how to code," which is why I made my comment. If OP spends time honing their coding abilities then by all means learning more about git is a great plan.

I am really surprised by your comments about git at "regular" companies. If all we are talking about in this chain is understanding workflow and how to rebase, cherry pick, etc. then I completely misunderstood the discussion.

I have certainly gotten myself into some hairy situations. Since I avoid making massive commits, if things get too bad I have always been able to quickly resolve the issue by just doing a clean clone somewhere else and moving my changes over. As a last ditch effort it works quite well and does not take too much time (or stress :) ).


Oh I'm sure some other people took the discussion on various different ways. Such is communication between humans ;)

Btw. in case it helps you. No fresh clone in a different place needed. I think I know what kind of situation you mean and all you need is to cherry pick your commit on top of the branch you want instead of doing that merge/rebase that isn't working out. Takes even less time than cloning somewhere else and moving your changes over.

And in some cases what this sort of situation really just needed was an interactive rebase that skips the appropriate commits that already happened on the main branch. Suddenly a litany of seemingly unresolvable conflicts doesn't even exist in the first place. Many ways leading to Rome there.

I would encourage you to always work from just within exactly one repo with git. The whole "having another copy of the repo somewhere else" is something I have seen so much with svn but it just really isn't required with git at all. And even if you have to "do a fresh clone" you really don't have to. Just get rid of all files (rm -rf) except the .git directory (or copy it where you need it) and checkout again. I've used that a few times when I was having build issues and I wanted to make absolutely sure there were no generated files from either my IDE or the local build left anywhere that could screw things up.
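The "fresh clone without cloning" trick from the last paragraph can be sketched the same way (file names are illustrative):

```shell
set -e
# Throwaway repo: wipe everything except .git, then restore tracked files.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo hello > tracked.txt && git add tracked.txt && git commit -qm "initial"
echo junk > generated.tmp        # simulate stale build/IDE output
find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
git checkout -q -- .             # re-checkout tracked files from the repo
ls                               # tracked.txt is back; generated.tmp is gone
```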


Pretty sure you're getting downvoted because this sounds like the advice of a professor who hasn't spent much time in an industry setting: grind algorithmic problems to succeed, only to find out past the interview that the knowledge rarely gets used.

I've managed to not have to do a technical interview for all of my internships and jobs so far, largely due to my efforts networking and focusing on learning popular technologies (especially React). I've done the theoretical coursework and enjoy the problems, but the hour-a-day leetcode would not have been nearly as useful as learning a popular library and building connections.

And that said, I'd say git problems are the norm especially with newer devs. Having that one person on the team who is a git-master is invaluable when you've made a mess.


Hmm - that is surprising since I specifically mention I work in industry (and have for quite some time).

It sounds from your post like you may do front-end work. I work primarily on distributed systems for machine learning, so I can't speak to front-end work, but in my experience understanding basic data structures and algorithms is quite useful in day-to-day work. It is great you have gotten to where you are without doing technical interviews. On the other hand, every job interview I have had has had multiple rounds of technical interviews.

And I should also make clear that I am not saying become a competitive programmer or a Leetcode expert. For example, there isn't much value in looking at dynamic programming style problems unless you are interviewing at a top company. But spending 30min to an hour a day on easy to medium level questions will definitely sharpen your reasoning about algorithms and data structures. And like I said, in my experience those are used very often in day to day work (at least on the backend side of things). Also, to clarify, I am not saying you should do this forever.

As an anecdotal example, I recently reviewed a PR that had a lot of complex if-else statements and was dramatically simplified through the use of a set. The updated code was easier to read as well as understand. When I pointed this out, the engineer agreed and understood what was going on. But their initial instinct was to use the one tool they knew: arrays and chains of if-else statements. This is the kind of skill I am getting at - knowing enough about your language, data structures, and algorithms to know when to use the tools in your toolbox.
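The kind of refactor described above might look something like this (the names and cases are invented for illustration, not from the actual PR):

```python
# The "before": every case spelled out in a chain of comparisons.
def should_retry_verbose(status: str) -> bool:
    if status == "TIMEOUT":
        return True
    elif status == "THROTTLED":
        return True
    elif status == "UNAVAILABLE":
        return True
    return False

# The "after": one O(1) set-membership test.
RETRYABLE = {"TIMEOUT", "THROTTLED", "UNAVAILABLE"}

def should_retry(status: str) -> bool:
    return status in RETRYABLE
```

Besides being shorter, the set version separates the data (which statuses are retryable) from the logic, so adding a case is a one-word change.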

I don't think it is unreasonable to expect a software engineer to understand the difference between tree-based and hash-based data structures, when you should use arrays, and pros/cons of linked lists, etc. Practicing this kind of stuff, which is very easy to do in Leetcode, is the fastest way to build this intuition (at least in my experience with distributed systems).

Edit: just wanted to add that understanding these things makes it extremely easy to reason about systems like Redis, memcached, Cassandra, Kafka, etc. If you understand the basics, then you start having these moments of clarity, thinking "oh, this is just a big hash table!" etc.

Also, meant to add that the repetition of Leetcode style questions is super valuable in learning the APIs of your language. Things like "how do I create a hash table? How do I populate it? How do I check if it contains a key?" This is all simple stuff, but a lot of new grads aren't as familiar with the APIs. It's not a big deal, sure, but it is also an _incredibly_ easy way to stand out.
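In Python, the three hash-table operations listed above look like this (the word-count data is just an example):

```python
# Create, populate, and query a hash table (a dict in Python).
counts: dict[str, int] = {}                  # create
for word in ["to", "be", "or", "not", "to", "be"]:
    counts[word] = counts.get(word, 0) + 1   # insert/update

print("to" in counts)   # containment check
print(counts["be"])     # lookup
```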


I think you're definitely right about how taking time to practice with algorithms and data structures using tools like Leetcode can give you that nudge of intuition that helps you reason about other systems.

In fact, I had a similar "simplification" moment during the summer after practicing Leetcode for a few weeks. There was a longer than needed function that returned a list of unique items, and realizing the intent of the code, I simplified it to a one-liner using built-in data structures. Small moments of recognition like that feel great!
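A simplification of that shape might look like the following sketch (this is a guess at the pattern, not the actual code from the anecdote):

```python
# The longer original: manual duplicate tracking.
def unique_items_verbose(items):
    seen = []
    for item in items:
        if item not in seen:
            seen.append(item)
    return seen

# The one-liner using built-ins; dict.fromkeys keeps first-seen order.
def unique_items(items):
    return list(dict.fromkeys(items))
```

If order doesn't matter, `list(set(items))` is even shorter; `dict.fromkeys` is the order-preserving variant.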

Your point about becoming more familiar with your language's APIs is also a great one. After the aforementioned Leetcode practice, I didn't have to look up things half as much as I did previously about Python. It was a good feeling :)

Unrelated to the above anecdote, I recently started following the Backend Engineering Show with Hussein Nasser [0], and he makes a lot of these kinds of connections to popular technologies like you described.

EDIT: forgot to add the link!

[0]: https://open.spotify.com/show/55pPBm0l75K28dIqoHIQIc


So for technical interviews, yes you should absolutely grind algorithms, and not just any algorithms, but exclusively the stuff that comes up in leetcode, and yes you should do weeks of daily practice at leetcode or hackerrank if you want to get hired at a FAANG (or many other places). Up to a certain level getting better at leetcode problems will even improve your actual programming skills (BTW: I would be extremely interested to hear about any data concerning this improvement).

Beyond bandwagon effects I suspect these tests originally became so popular because they are an unproblematic way to select for people who are highly intelligent but also do as they are told.

This also points to a potential shortcoming of the "grind leetcode" strategy: it is applicable to the extent you meet the above characteristics and even if you do well enough to commend yourself as a high-spec corporate cog, and get that FAANG job, it does little to differentiate yourself against other high-spec corporate cogs (unless you are one of the tiny top fraction of competitive coders, maybe).

So with the exception of mastering algorithms (as taught in uni beyond leetcode needs) or mastering C++ or Java[+], I wholeheartedly endorse your advice, but becoming really good at some suitable and economically useful X is, I think, a more widely applicable strategy with a higher ceiling.

Also, I find it interesting that although I explicitly mention that my idea of learning git deeply would involve the ability to implement it (rather than memorizing the man-pages, say, and presumably validated by actually taking a stab at it), a few people pointed out that git is essentially too pedestrian to be useful. This is not the case, in my experience: understanding git well goes way beyond the antiquarian, and a lack of git skills is in fact a productivity drain at many or most companies.

[+] Algorithm courses at uni seem to gravitate towards what's neither interesting nor useful. Of course, go ahead and concentrate on C++ or Java if your interests require it (e.g. games programming). Otherwise, I suspect the best languages to master early are either python or javascript. If you care about machine learning or science, master python; if you care about web and app development or graphics, master javascript; otherwise pick the one you like better. Either will deliver much better bang for the buck in terms of skilling up and being able to do interesting things in a short amount of time than Java (which risks pulling you towards enterprise antipatterns unless you have good guidance) or C++ (which requires mastering an enormous amount of antiquarian knowledge to get anything done). I would maybe complement this with learning enough C to be comfortable with heap, stack and pointers and enough Rust, Ocaml or Haskell to have a glimpse at what a language with a proper type system looks like.


Integrating rg and fzf into vim changed my programming life. I find that combo pretty incredible and something I haven't found in IntelliJ. I usually find myself switching back to my shell and then back to IntelliJ.
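A minimal version of that integration, assuming the junegunn/fzf and fzf.vim plugins are installed and ripgrep (`rg`) is on `$PATH` (the key choices are illustrative):

```vim
" Fuzzy file picker (fzf.vim's :Files command)
nnoremap <C-p> :Files<CR>
" Live ripgrep search across the project (fzf.vim's :Rg command)
nnoremap <leader>g :Rg<CR>
```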


Yes, I probably should have said "use ag in the context of emacs" just to clarify how it gets used.


I am not a "TLA+ promoter" but think it is a very valuable tool for anyone building distributed systems. The value of TLA+ is that it forces you to carefully consider your algorithm, which is certainly important if the algorithm is complex but equally important if the algorithm is simple. Most people will struggle to correctly specify even a simple algorithm in TLA+ because they will miss a lot of things they had assumed without ever thinking about.

Real world systems need to handle all the things you mention. TLA+ helps you consider all these issues by spelling them out individually. There is no point in building a complex system if you haven't taken the time to validate the correctness of the target system in the first place.


Doesn't blaze build remotely via citc? I thought files were loaded locally for editing but built remotely.


Blaze itself runs locally, reading BUILD files from citc. The build actions mostly execute remotely.


I don't disagree, but this goes both ways (and I typically only see one side of this argument put forward). They also allow the person who designed the types to make the types so complex that the average user cannot understand what is going on. You may argue "well that person has no business looking at the code" but sometimes you have to look into other people's code when you are debugging an issue.

No doubt type systems can allow you to write safer, more robust code. But they can also allow you to introduce new risks to your codebase, one of which is difficult to read/understand code.


They are probably thinking of simply going through the list and tracking the top 3. This gives you linear time, but as you noted you can also use quickselect (edit: I originally mistakenly wrote selection sort despite thinking of quickselect), giving you linear time (edit: I originally said logarithmic time, which is obviously wrong).

I suppose it is beside the point, but you shouldn't be reaching for a specific algorithm but rather using your language's library sort algorithm.
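Both versions above can be sketched in a few lines (the sample data is made up); in Python the "use the library" route is `heapq.nlargest`, which also runs in linear time for a fixed k:

```python
import heapq

data = [7, 2, 9, 4, 9, 1, 5]

# Single linear pass, tracking only the current top three
# (kept sorted ascending, so top3[0] is the smallest of the three).
top3 = []
for x in data:
    if len(top3) < 3:
        top3.append(x)
        top3.sort()
    elif x > top3[0]:
        top3[0] = x
        top3.sort()

print(sorted(top3, reverse=True))   # [9, 9, 7]
print(heapq.nlargest(3, data))      # [9, 9, 7]
```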


What do you mean by selection sort giving logarithmic time? I mean any algorithm finding anything unique in an unsorted list has to be at least O(n) or resort to magic.


Well I was actually thinking of quickselect, but regardless I was too quick to post. Quickselect is O(n) average time (as you pointed out).


This has happened to me several times, albeit at a much smaller scale. I fire up a few GPU instances for training neural networks, and when I go to shut the instances down I forget that you always need to refresh the instance page before telling AWS to stop my instances. I still go through all the confirmations saying I do, indeed, want to stop all instances. However, these few times I forgot to refresh to make sure they actually were shutting down and simply went to bed. Not an $80k mistake, but certainly a couple hundred dollars, which hurts as a grad student.

Now I have learned, _always_ refresh the page and instance list prior to shutting anything down and _always_ confirm the shutdown was successful.


I expect the 13" will eventually move to a 14" similar to the 15 -> 16 change.

