If the proof of concept takes an hour to code up, or proving the market exists just takes a bit of googling, then sure, you can prepare that before the first meeting where you suggest the idea.
If the proof of concept requires spending a few days in the machine shop making jigs and parts, purchasing equipment, and getting a custom PCB made, then I really hope you'll bring it up for discussion beforehand in a meeting. Ten minutes of discussion with colleagues might be as useful as several iterations of prototyping. Not so that they'll shoot it down, but because someone might say "oh yeah, we have a spare McGuffin from last year's demo that you can use, should save you lots of time."
You must have very different kinds of meetings than I do. Unless you're going into that meeting with a rehearsed PowerPoint presentation, or there's a strict agenda that doesn't allow any time for exploration, I expect to hear imperfect-ideas-in-infancy. One of the reasons we have meetings is to allow collaboration to happen. It's a format for working together.
Yes, meetings vary profoundly in terms of their quality, purpose, and participation. For instance, is it a meeting of peers, or are managers in the room? If there's a large disparity of roles in attendance (e.g., junior engineers, marketing managers, and maybe one or two executives), it's different than if it's a true meeting of peers. And if managers are capable of attending those meetings without quashing collaboration, hats off to them.
You can fire the shot and patch the hole at the same time: propose solutions to the same problem you pointed out, rather than just shooting and leaving one person to handle defense against every attack.
Definitely impressive as a proof of concept. A lot of the other problems can be solved with iteration. There are some IP67-rated drone motors meant to both fly and run underwater (available from Westmag for example).
There might be less that can be done about the underwater drag, but if it doesn't need to go long distances underwater that's not as much of a problem. For the RF signal, it can either run autonomously underwater, or use a fibre-optic umbilical, or even convert from an umbilical to wireless when it takes to the air.
You can do a lot with chaos. One of the things it lets you do is find an unforced trajectory from the vicinity of any state to the vicinity of any other (accessible) state. Sensitivity to initial conditions means sensitivity to perturbations, which also means sensitivity to small control inputs, and this can be leveraged to your advantage.
Multibody orbits are one such chaotic system, which means you can take advantage of that chaos to redirect your space probe from one orbit to another using virtually zero fuel, as NASA did with its ISEE-3 spacecraft.
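As a toy illustration of that sensitivity (this is not the actual trajectory math NASA used, just the generic property being described), here are two trajectories of the logistic map, a standard chaotic system, started a millionth apart:

```python
# Two logistic-map trajectories differing by 1e-6 in their initial
# condition. At r = 4 the map is chaotic, so the tiny difference grows
# roughly exponentially -- the same property that lets a tiny control
# input (or a tiny fuel burn) steer the system between distant states.

def logistic(x, r=4.0):
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001  # initial conditions differ by 1e-6
for step in range(1, 31):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        break

print(f"trajectories diverged past 0.1 after {step} steps")
```

With an average stretching factor of about 2 per step, a 1e-6 perturbation reaches order-one separation in roughly 15 to 20 iterations.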
Interesting to see this on the HN front page. On the subject of methane pyrolysis, it turns out that if you look at the Gibbs free energy calculation, about half of the energy of methane combustion is released from the formation of water, and the other half from the formation of carbon dioxide. That suggests that if you can efficiently conserve the heat of pyrolysis, you can build a methane power plant that starts with a pyrolysis step to separate out the carbon in an oxygen-free environment, then burns the remaining hydrogen to power the cycle. The end result would be a zero-emissions natural gas power plant. It would require twice as much gas to run, but if you can find a good value-added use for the carbon, it could potentially still be cost-effective.
This would probably be much more efficient than doing pyrolysis to extract the hydrogen for use in electricity generation somewhere else, because you don't lose the substantial stored heat energy in the process of cooling that hydrogen back down.
And I can't help but wonder if fossil fuel companies might suddenly start endorsing aggressive zero-emissions targets if there's a way for this to double the demand for their products, rather than eliminating it.
> On the subject of methane pyrolysis, it turns out if you look at the Gibbs free energy calculation, about half of the energy of methane combustion is released from the formation of water, and the other half from the formation of carbon dioxide.
About 70% of the energy is in hydrogen, 30% is in carbon.
1 GJ of methane weighs about 20 kg, 5 kg of which comprise hydrogen.
At 142 MJ/kgH2 (higher heating value, which implies condensation of the produced water), 710 MJ out of that 1 GJ is due to hydrogen.
With a 60%-70% efficient hydrogen fuel cell, about 50% of the electricity generated from hydrogen from pyrolysis of methane would drive the process, and 50% could go into the grid.
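For what it's worth, the arithmetic above checks out with standard rounded constants (mixing methane's lower heating value with hydrogen's higher heating value, as the comment does):

```python
# Back-of-the-envelope check of the numbers above, using rounded
# textbook constants.
M_CH4, M_H = 16.04, 1.008   # molar masses, g/mol
lhv_ch4 = 50.0              # MJ/kg, lower heating value of methane (approx.)
hhv_h2 = 142.0              # MJ/kg, higher heating value of hydrogen

kg_ch4_per_gj = 1000.0 / lhv_ch4         # ~20 kg of methane per GJ
h_mass_fraction = 4 * M_H / M_CH4        # ~0.25 of methane's mass is hydrogen
kg_h2 = kg_ch4_per_gj * h_mass_fraction  # ~5 kg of hydrogen
mj_from_h2 = kg_h2 * hhv_h2              # ~710 MJ, i.e. ~70% of the 1 GJ

print(f"{kg_ch4_per_gj:.1f} kg CH4 -> {kg_h2:.2f} kg H2 -> {mj_from_h2:.0f} MJ")
# -> 20.0 kg CH4 -> 5.03 kg H2 -> 714 MJ
```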
You have to account for the energy required to break the bonds of the CH4, though. This means if you burn methane the usual way you get (CH4 + 2O2 --> CO2 + 2H2O + 803 kJ/mol); if you burn it with an ideal zero-emissions reaction, you get (CH4 + O2 --> C + 2H2O + 409 kJ/mol), or just a little more than half the energy from the same gas.
Your accounting works if someone else does the pyrolysis for you and you're left with just the H2 and C at the end, but mine includes the energy consumed by the pyrolysis step that breaks the methane molecule (albeit neglecting thermodynamic losses, of which there will be several -- for example, you need to recapture the heat carried away by the hot carbon atoms). On the other hand, you can hardly wish for a better feedstock for CVD diamond production...
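Both reaction energies can be sanity-checked from standard enthalpies of formation (kJ/mol, with gaseous water, i.e. lower-heating-value accounting):

```python
# Standard enthalpies of formation, kJ/mol (gaseous H2O; elements are zero).
dHf = {"CH4": -74.9, "CO2": -393.5, "H2O": -241.8, "C": 0.0, "O2": 0.0}

# CH4 + 2 O2 -> CO2 + 2 H2O   (ordinary combustion)
full_burn = (dHf["CO2"] + 2 * dHf["H2O"]) - (dHf["CH4"] + 2 * dHf["O2"])

# CH4 + O2 -> C + 2 H2O       (pyrolysis first, then burn only the hydrogen)
h2_only = (dHf["C"] + 2 * dHf["H2O"]) - (dHf["CH4"] + dHf["O2"])

print(f"full combustion: {full_burn:.0f} kJ/mol")  # ~ -802
print(f"hydrogen only:   {h2_only:.0f} kJ/mol")    # ~ -409
```

The ~409 kJ/mol figure already "pays" the +74.9 kJ/mol it costs to break methane apart, which is the accounting difference being discussed.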
It's mainly the laser itself that is the expensive part. If you only care about resolution it's easy, you just need a single-mode laser. But if you care about accuracy it's very difficult, because then the wavelength needs to be stable, and that requires a much more expensive laser. Most people looking for an interferometer are interested in accuracy, unless they're just measuring vibrations.
You can get pretty far with cheap diodes + current and temperature control. Unless you need coherence lengths in the meters range you can make do with cheaper lasers.
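Rough numbers, using illustrative linewidths rather than the specs of any particular laser:

```python
# Order-of-magnitude coherence lengths: L_c ~ lambda^2 / d_lambda = c / d_nu.
# The linewidth figures below are typical illustrative values, not datasheet specs.
c = 3.0e8  # speed of light, m/s

# Bare Fabry-Perot laser diode: linewidth on the order of 1 nm at 650 nm
lam, dlam = 650e-9, 1e-9
lc_diode = lam ** 2 / dlam  # sub-millimeter: fine for vibrations, not much else

# Temperature- and current-stabilized single-mode diode: ~1 MHz linewidth
dnu = 1e6
lc_stable = c / dnu         # hundreds of meters

print(f"bare diode:  ~{lc_diode * 1e3:.2f} mm")
print(f"stabilized:  ~{lc_stable:.0f} m")
```

That roughly six-orders-of-magnitude spread is why the stabilization, not the diode itself, is where the money goes.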
The short summary of this hypothesis is that the ocean develops hypoxic zones, anaerobic bacteria boom, and eventually the ocean starts releasing masses of poisonous H2S gas that wipes out most life on land (and strips the ozone layer for good measure).
They speculate that this might have been a mechanism behind the "great dying" at the end of the Permian. I'm sure the thinking has advanced in the last 20 years, but whenever people ask what the worst-case scenario for global warming could be, my mind drifts back to this.
I disagree with the assertion that "VLMs don't actually see - they rely on memorized knowledge instead of visual analysis". If that were really true, there's no way they would have scored as high as 17%. I think what this shows is that they over-weight their prior knowledge, or equivalently, they don't put enough weight on the possibility that they are being given a trick question. They are clearly biased, but they do see.
But I think it's not very different from what people do. If directly asked to count how many legs a lion has, we're alert to it being a trick question so we'll actually do the work of counting, but if that image were instead just displayed in an advertisement on the side of a bus, I doubt most people would even notice that there was anything unusual about the lion. That doesn't mean that humans don't actually see, it just means that we incorporate our priors as part of visual processing.
This feels similar to the priming issue in humans. Our answers (especially under stress) tend to fall back on heuristics derived from context. Time someone naming the ink color of words like “red” printed in yellow (the Stroop effect), and they’ll often get it wrong. In the same sense, they aren’t reporting the color (wavelength) they see; they’re reporting what they’re reading.
I wonder how much better the models perform when given more context, like asking it to count instead of priming it with a brand.
> Original dog (4 legs): All models get it right
> Same dog with 5 legs: All models still say "4"
> They're not counting - they're just recalling "dogs have 4 legs" from their training data.
100% failure because there is no training data about 5-legged dogs. I would bet the accuracy is higher for 3-legged dogs.
> Test on counterfactual images
> Q1: "How many visible stripes?" → "3" (should be "4")
> Q2: "Count the visible stripes" → "3" (should be "4")
> Q3: "Is this the Adidas logo?" → "Yes" (should be "No")
> Result: 17.05% average accuracy - catastrophic failure!
Simple explanation: the training data also includes fake Adidas logos that have 4 stripes, like these
I tried it with GPT-4o: I took the 5-legged zebra example from their GitHub and it answered quite well.
"The animal in the image appears to have five visible legs, but this is an illusion caused by the overlapping of legs and motion blur. Zebras, like all equids, only have four legs."
Not perfect, but also doesn't always regress to the usual answer.
"The animal in the image appears to be an elephant, but it has been digitally altered. It visually shows six legs, although the positioning and blending of shadows and feet are unnatural and inconsistent with real anatomy. This is a visual illusion or manipulation." (actually should say five)
"This bird image has also been manipulated. It shows the bird with three legs, which is anatomically impossible for real birds. Normal birds have exactly two legs." (correct)
"Each shoe in the image has four white stripes visible on the side." (correct)
It sounds like you asked multiple questions in the same chat thread/conversation. Once it knows that it is facing weird data, or that it was wrong in previous answers, it can turn on that "I'm facing manipulated data" mode for subsequent questions. :-)
If you have the Memory setting on, I've observed that it sometimes also answers a question based on your prior questions/threads.
But models fail on many logos, not just Adidas (e.g. Nike, Mercedes, Maserati). I don't think they can recall a "fake Adidas logo", but it'd be interesting to test!
It sounds to me like the same thing behind the Vending-Bench (https://andonlabs.com/evals/vending-bench) insanity spirals: LLMs treat their assumptions as more important than whatever data they've been given.
> the assertion that "VLMs don't actually see - they rely on memorized knowledge instead of visual analysis". If that were really true, there's no way they would have scored as high as 17%.
The ability to memorize leads to (some) generalization [1].
They're trained on a lot of images and text; the big ones are trained on terabytes. The prompts I read in the paper involved well-known concepts that probably repeated in tons of training samples, too.
Also presumably, this problem is trivially solved by some basic fine-tuning? Like if you are making an Illusion Animal Leg Counting app, probably don't use these out of the box.
If I were given five seconds to glance at the picture of a lion and then asked if there was anything unusual about it, I doubt I would notice that it had a fifth leg.
If I were asked to count the number of legs, I would notice right away of course, but that's mainly because it would alert me to the fact that I'm in a psychology experiment, and so the number of legs is almost certainly not the usual four. Even then, I'd still have to look twice to make sure I hadn't miscounted the first time.
Ok, but the computers were specifically asked to count the legs and return a number. So you've made the case that humans would find this question odd and likely increase their scrutiny, which would make an error by a human even more unusual.