There is a funny notion that professors work for students. They don't. They hardly even work for the university. It is more correct to say that they are affiliated with the university.
At top universities, professors bring in grant money or they lose their jobs. The university takes a cut of each grant before any of the money is spent. From the remainder, the professor pays the university for graduate students' tuition and stipends, for the use of university facilities, and for rent on research space. On top of that, the university pays professors for only 9 months of the year, so another chunk of grant money covers the professor's salary for the remaining 3 months.
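As a rough illustration of how the cut works (the numbers below are hypothetical; indirect-cost rates vary by institution, commonly somewhere around 50-60% at top research universities):

```python
# Hypothetical example: a $1,000,000 grant at a 55% indirect-cost rate.
# Overhead is typically charged on top of direct costs:
#   total = direct * (1 + rate)
total = 1_000_000
rate = 0.55

direct = total / (1 + rate)      # what the professor can actually budget
overhead = total - direct        # taken by the university off the top

print(f"direct:   ${direct:,.0f}")    # roughly $645,161
print(f"overhead: ${overhead:,.0f}")  # roughly $354,839
```

And tuition, stipends, rent, and summer salary still come out of that direct-cost portion.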
Top universities scrape billions off research grants annually. These grants, won by faculty members, dwarf faculty salaries: professors bring more cash into the university than their salaries cost.
In reality, professors finance the university. Students are mistaken if they think they are their professor's employer.
Most of the time, I'm unable to reproduce published work in CS. Partly that's because technology moves so fast. You can't earn a PhD or tenure by maintaining open source software, so after a few years most implementations have rotted. Research software rots faster than other software because it's built by people who aren't professional engineers; code quality doesn't earn PhDs or tenure, either. We also build on expensive new technologies with changing interfaces. So research software almost always rots.
Also in part because papers are “dressed up.” That’s a polite way of saying that the description of what they’ve built has been stretched. Authors need to make reviewers excited. They do this with ideas, not code. The purpose of their code is to show that the ideas could be practical, not that they are. So research software is almost always incomplete.
It's not necessarily about a machine's raw performance, but capacity. The compute is a shared utility, and the utilization of supercomputers is quite high. We expect science to grow exponentially, so compute demand grows exponentially too. The cloud is wildly expensive compared to supercomputing, and you don't want missing compute to hold back research.
If science grows exponentially (rather than linearly or sublinearly), you're screwed.
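A toy way to see the squeeze (the growth rates here are made up purely for illustration): exponential demand eventually outruns any fixed linear build-out of capacity.

```python
# Illustrative only: demand growing 20%/yr vs capacity added at a
# fixed linear rate. Both start at 100 units.
capacity, demand = 100.0, 100.0
for year in range(1, 21):
    capacity += 20.0        # linear: add 20 units every year
    demand *= 1.20          # exponential: grow 20% every year

print(capacity)  # 500.0 after 20 years
print(demand > capacity)  # True: demand has far outgrown capacity
```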
I don't think the cloud is expensive compared to supercomputing, so long as you consider TCO and negotiate properly. I would much rather have an evergreen cluster in the cloud with a fast interconnect that I turn on for myself and shut down when I'm done than share some resource and try to keep utilization high.
But the reward (i.e., meaning) is defined by a human engineer. The point of the article is to say that we don’t have AGI yet because today’s AI needs humans to ascribe meaning to its decisions.
Agree. Instead of "meaning" I might just use "desire". The AI is trained to meet the human engineer's chosen desire. The engineer specifies this desire through a loss function (supervised learning) or rewards (reinforcement learning). Two more points:
1. This "desire" applies only to the training algorithm.
The deployed AI (running in inference) is no longer trying to minimize a loss function or maximize discounted rewards.
2. This "desire" is not a "deliberate desire".
It is more like an "animal desire" such as wanting to eat because of hunger. Aristotle draws this distinction. Such a desire is automatic (rather than deliberate) and is not part of choice-making.
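A minimal sketch of point 1 (all names and numbers here are illustrative, not from the article): during training, parameter updates are driven by a loss the engineer chose; at inference, the deployed model just computes outputs and never evaluates that loss.

```python
import random
random.seed(0)

# The engineer's "desire" is encoded entirely in this loss function.
def loss_grad(w, x, y):
    # gradient of (w*x - y)^2 with respect to w
    return 2 * (w * x - y) * x

w = 0.0
data = [(x, 3.0 * x) for x in range(1, 6)]   # hidden target: w = 3

# Training: repeatedly nudge w to reduce the loss.
for _ in range(200):
    x, y = random.choice(data)
    w -= 0.01 * loss_grad(w, x, y)

# Inference: the deployed model just computes w * x.
# No loss is evaluated or minimized at this point.
predict = lambda x: w * x
print(round(predict(2.0), 2))  # close to 6.0
```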
For base desires, that 'agent' is evolution. E.g., we can't will ourselves to stop breathing for the next hour, no matter what our higher-level desire might be.
Many of our higher desires are set for us (taught) by other humans: our parents and society. We wouldn't want an AGI that was able to set all of its own desires, any more than we'd want that in our children. Rather dangerous indeed.
Depends on the LSP client you're using. Eglot doesn't do much to manage language servers and leaves installing them to the user. lsp-mode will find and install language servers if needed.
LCGs are a multiply followed by an add followed by a modulus, where the modulus is usually implicit in a mask or in the native word length rolling over. Notice that multiplies produce patterns in the low digits that an add only offsets, and the modulus throws away high bits rather than mixing them back into the low bits. Consider the low digit of multiples of 7 (what you'd get picking a number between 0 and 9 inclusive via modulus):
The lowest digit repeats the sequence 7418529630, which isn't very random. Modulus preserves the low-order bits and throws away the magnitude of the number. The upshot: if you want half-decent low-valued random numbers from an LCG, take x / range * limit rather than x % limit. You can do this with integer arithmetic via a multiply and a shift if you have an integer type twice the size of your LCG's state.
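To make that concrete, here's a small sketch (the constants are the common Numerical Recipes LCG parameters, used purely as an example, not any particular implementation):

```python
M = 2**32                    # modulus: 32-bit wraparound
A, C = 1664525, 1013904223   # example multiplier/increment (Numerical Recipes)

def lcg(x):
    return (A * x + C) % M

# The repeating low digit of multiples of 7, as described above:
digits = [(7 * i) % 10 for i in range(1, 11)]
print(digits)  # [7, 4, 1, 8, 5, 2, 9, 6, 3, 0]

# Scale into [0, limit) using the high bits instead of '%':
# equivalent to floor(x / M * limit), via a widening multiply and a shift.
def scale(x, limit):
    return (x * limit) >> 32

print(scale(2**31, 10))  # 5: the midpoint of the range maps to limit/2
```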
But again, don't use an LCG for generating your password.
[1] I looked at the source. Bash looks like it tries to use random(3) if it's available, otherwise it seems to use this, which doesn't have an add:
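A multiplicative LCG with no additive term is called a Lehmer generator; the classic Park-Miller "minimal standard" version (shown here only as an illustration of the add-free form, not as bash's actual code) looks like:

```python
def lehmer(seed):
    # Park-Miller minimal standard generator: multiply and reduce
    # modulo the Mersenne prime 2^31 - 1. Note there is no "+ c" term.
    return (seed * 16807) % 2147483647

x = 1
x = lehmer(x)
print(x)  # 16807
x = lehmer(x)
print(x)  # 282475249
```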
GCJ had compatibility issues: it never supported Java 1.5, and it couldn't handle static linking properly. For a large-scale project it usually wasn't an option.
I think the real key is that Apple has recognized that App Store delays are a problem and has taken steps to quantifiably improve the situation. See https://appreviewtimes.com/. Anecdotally, the first version of one of my Apps was approved in < 8 hours. On another, more gray area app, it took ~1.5 weeks. Gone are the days of 4 week update delays. I’ve found that Apple’s release process has identified useful problems in my apps too.
+1 on Firebase. I recently helped a buddy build an app and decided to give it a try. Holy crap. All we had to do was build the app! It went significantly faster than either of us estimated.
Be careful with this, especially with mobile apps that can't be easily updated. Using Firebase has a tendency to put what would otherwise be backend logic into the frontend, which can make it much harder to make changes quickly once your app has users.
No, that's not it. C-h t isn't really interactive; it's just a tutorial you can edit. The program I remember was a typing tutorial specifically for key bindings.