Hacker News | brk's comments

Nothing works perfectly in all conditions and scenarios. Sensor fusion is the most logical approach now, and for the foreseeable future.

Computer vision does not work exactly like human vision, and closely equating the two has tended to work out poorly in extreme circumstances.

High performance fully automated driving that relies solely on vision is a losing bet.


For marine applications dual drives are common as they enable better rotational control for maneuvering. The redundancy aspect is also a factor, but more so for applications where you are going to be far from shore. For tugboat and ferry type applications, where these drives are most common, that is less of a concern.

For the most part the Sharrow props have not proven to be much of an improvement, particularly for the high price.

The tests that have shown "significant" improvements have frequently compared the Sharrow to a sub-optimal prop. Feedback from many actual users is that the gains are moderate over a narrow RPM range.


Do you have any more information/sources to share on this? I have an Eastern 18 powered with a Yamaha 60hp 4 stroke and I've been struggling to dial in the prop. I don't know off the top of my head what the specs of my current prop are, but basically I feel like I'm not taking advantage of the engine's torque at less than WOT, so I basically just run it flat out. If I could extract just a little more thrust out of the prop at lower RPMs it feels like the engine would have enough grunt to make the boat plane in the mid-high 4000s instead of 5000-5200rpm where I currently run it. Ideally, given the bsfc/hp curves, I'd like to run the engine at a bit lower RPM, but the way it's currently set up at ~4600rpm it's not fully up on the step. I was (perhaps wishfully) thinking a little more efficient prop design might help.

The other thing I was thinking of trying is swapping in a different "high torque" lower unit with a lower gear ratio and running a significantly larger prop.


Sources are primarily boating forums, dockside conversations, etc.

In theory your boat is right in the sweet spot of the recommended power range at 60HP. I don't know all the background on it, so there are all kinds of potential problems, but I would wager that "propped wrong" is unlikely to be the core culprit.

I'd start by getting it weighed and comparing your loaded weight to manufacturer specs. USCG requires positive buoyancy for hulls under 20'. This is typically achieved by using expanding foam in hull cavities, and that foam can have a tendency to absorb and hold water if the boat develops any failure of the seals around the bilge areas that are foamed. Reports of poor performance are very common for these sub 20' hulls because of waterlogging. If not a waterlogged hull, you might also just have too much stuff on-board.

To a lesser degree, a bimini can also have an adverse effect on speed/planing if it's acting like a parachute. Not sure if you have a bimini, but if so it's worth trying a run with it up vs. down.

I'd also look at how your outboard is mounted. It's not clear if the outboard is original from the factory, or if the boat has been repowered. Outboards being mounted too high, too low, etc. are pretty common issues that can also majorly impact performance.

Those are a few thoughts that come to mind off-hand.


Yeah, you're likely right it probably makes sense to just run it rather than trying to optimize further. As for excess weight, there is some--mine is a rare variant that has a small cuddy cabin forward, and that thing is really wet and needs to be completely rebuilt. A previous owner's refit of the decks removed any flotation there may have been originally, but also introduced a lot of unsealed wood which is now wet and heavy. It needs a deck job soon.

The boat was re-powered under my ownership. I'm pretty confident the motor height is correct based on a variety of observations and measurements, so I don't think there's really anything to adjust there.

I wouldn't say I have a complaint with the boat's performance, more like trying to get the engine to run at cruise in a more efficient range of the bsfc/hp map, which may be a tall order at 60hp. To your point, though, if I can shed a couple hundred pounds in the refit that could very well do it.


Most boats operate 95%+ of their engine hours in a very narrow RPM range.

Yes and that range is at higher RPMs where the Sharrow props have reduced benefit.

And much harder to repair.

Resolution and positional accuracy are very poor. It’s more like ‘an approximate bag of water detector’.

Gait analysis is complete fiction. Especially with a non-visual approach like this.


Given the number of gait analysis publications over several decades using varying techniques, can you recommend a good review article disproving all of them?


Given the number of publications about curing <pick your uncured disease> over several decades using varying techniques, can you recommend a good review article disproving all of them?

Answer: no need, if it had been cured, it would be cured. And it is not.

My point being that many publications saying "towards X" may mean that we are making some progress towards X, but they don't mean at all that X is possible.


I don’t think anyone has ever tried to publish something disproving all of the gait analysis claims. That would be an odd sort of thing. But I have not seen anything that we could call productized and reliable. It’s relatively easy to publish theoretical papers. Much harder to show it working reliably in the wild.


If you can do that you can infer when someone is home or away.


I am surprised you could look at that page and expect to receive a quality product. The images all look like really low grade AI-generated renderings. The mug in the cupholder and the giftbox image in particular don't stand up to even casual scrutiny.

Not trying to make your situation worse, I just find it interesting what these sites are able to get away with to get people to part with their money.


I just find it interesting what these sites are able to get away with to get people to part with their money.

That reminds me of https://en.wikipedia.org/wiki/Willy%27s_Chocolate_Experience

...which ironically has crossed the line into "so bad it's good" territory.


FWIW I've had good luck with Intertek https://www.intertek.com/ for most testing and certification processes across a variety of products.


Wait, now we have to deal with Carriage Line Return Feeds too?

I wonder if the person who had the idea of virtualizing the typewriter carriage knew how much trouble they would cause over time.


Yeah, and using two bytes for a single line termination (or separation or whatever)? Why make things more complicated and take more space at the same time?


Remember that back in the mists of time, computers used typewriter-esque machines for user interaction and text output. You had to send a CR followed by an LF to go to the next line on the physical device. Storing both characters in the file meant the OS didn't need to insert any additional characters when printing. Having two separate characters also let you do tricks like overstriking (just send CR, no LF).


True, but I don’t think there was a common reason to ever send a linefeed without going back to the beginning. Were people printing lots of vertical pipe characters at column 70 or something?

It would’ve been far less messy to make printers process linefeed like \n acts today, and omit the redundant CR. Then you could still use CR for those overstrike purposes but have a 1-byte universal newline character, which we almost finally have today now that Windows mostly stopped resisting the inevitable.


> now that Windows mostly stopped resisting the inevitable

I've been trying to get Visual Studio to stop mucking with line endings and encodings for years. I've searched and set all the relevant settings I could find, including using a .editorconfig file, but it refuses to be consistent. Someone please tell me I'm wrong and there's a way to force LF and UTF-8 no-BOM for all files all the time. I can't believe how much time I waste on this, mainly so diffs are clean.
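For reference, the kind of .editorconfig I mean (root settings only; whether Visual Studio actually honors all of these consistently is exactly the problem):

```ini
# Top-most EditorConfig file for the repo
root = true

[*]
end_of_line = lf
charset = utf-8
insert_final_newline = true
```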


Ugh, I didn't realize it was still that bad.

How far can you get with setting core.autocrlf on your machine? See https://git-scm.com/book/en/v2/Customizing-Git-Git-Configura...
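If the editor can't be tamed, pinning things at the repo level sometimes works better. A .gitattributes sketch that normalizes text files to LF on checkin and checkout, regardless of each machine's core.autocrlf setting:

```
# Treat all files git detects as text, and store/check out with LF
* text=auto eol=lf
```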


As I understand it (this may be apocryphal but I've seen it in multiple places) the print head on simple-minded output devices didn't move fast enough to get all the way back over to the left before it started to output the next character. Making LF a separate character to be issued after CR meant that the line feed would happen while the carriage was returning, and then it's ready to print the next character. This lets you process incoming characters at a consistent rate; otherwise you'd need some way to buffer the characters that arrived while the CR was happening.

Now, if you want to use CR by itself for fancy overstriking etc. you'd need to put something else into the character stream, like a space followed by a backspace, just to kill time.


I don't think that's right. Not saying that to argue, more to discuss this because it's fun to think about.

In any event, wouldn't you have to either buffer or use flow-control to pause receiving while a CR was being processed? You wouldn't want to start printing the next line's characters in reverse while the carriage was going back to the beginning.

My suspicion is there was a committee that was more bent on purity than practicality that day, and they were opposed to the idea of having CR for "go to column 0" and newline for "go to column 0 and also advance the paper", even though it seems extremely unlikely you'd ever want "advance the paper without going to column 0" (which you could still emulate with newline + tab or newline + 43 spaces for those exceptional cases).


I've seen this explanation multiple times through the years, but as I said it's entirely possible it was just a post-hoc thing somebody came up with. But as you said, it's fun to argue/think about, so here's some more. I'm talking about the ASR-33 because they're the archetypal printing terminal in my mind.

If you look at the schematics for an ASR-33, there's just 2 transistors in the whole thing (https://drive.google.com/file/d/1acB3nhXU1Bb7YhQZcCb5jBA8cer...). Even the serial decoding is done electromechanically (per https://www.pdp8online.com/asr33/asr33.shtml), and the only "flow control" was that if you sent XON, the teletype would start the paper tape reader -- there was no way, as far as I can tell, for the teletype to ask the sender to pause while it processes a CR.

These things ran at 110 baud. If you can't do flow control, your only option if CR takes more than 1/10th of a second is to buffer... but if you can't do flow control, and the computer continues to send you stuff at 110 baud, you can't get that buffer emptied until the computer stops sending, so each subsequent CR will fill your buffer just a little bit more until you're screwed. You need the character following CR (the carriage return itself presumably takes about 2/10ths of a second to complete) to be a non-printing character... so splitting out LF as its own thing gives you that and allows for the occasional case where doing a linefeed without a carriage return is desirable.
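A quick sanity check of that arithmetic (assuming the ASR-33's 11-bit framing: 1 start bit, 8 data bits, 2 stop bits):

```python
# Character timing for a 110-baud teletype with 11-bit framing.
BAUD = 110
BITS_PER_CHAR = 11  # 1 start + 8 data + 2 stop

char_time = BITS_PER_CHAR / BAUD  # seconds per character on the wire

# Sending CR then LF gives the carriage two full character times to
# travel back before the next printing character arrives.
travel_budget_crlf = 2 * char_time

print(f"{char_time * 1000:.0f} ms per character")         # 100 ms
print(f"{travel_budget_crlf * 1000:.0f} ms after CR+LF")  # 200 ms
```

So a CR that takes roughly 2/10ths of a second to complete fits almost exactly inside the CR+LF window, which is consistent with needing an occasional NUL pad on a worn machine.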

Curious Marc (https://www.curiousmarc.com/mechanical/teletype-asr-33) built a current loop adapter for his ASR-33, and you'll note that one of the features is "Pin #32: Send extra NUL character after CR (helps to not loose first char of new line)" -- so I'd guess that on his old and probably worn-out machine, even sending LR after CR doesn't buy enough time and the next character sometimes gets "lost" unless you send a filler NUL.

Now, I haven't really used serial communications in anger for over a decade, and I've never used a printing terminal, so somebody with actual experience is welcome to come in and tell me I'm wrong.


That's fascinating! They got a lot of mileage out of those 2 transistors, didn't they?

But see, that's why I think there has to be more to it. That extra LF character wouldn't be enough to satisfy the timing requirements, so you'd also need to send NUL to appropriately pad the delay time. And come to think of it, the delay time would be proportional to the column the carriage was on when you sent the CR, wouldn't it? I guess it's possible that it always went to the end but that seems unlikely, not least because if that were true then you'd never need to send CR at all, just send NUL or space until you calculated it was at EOL.


I think this was primarily about speeding up the measurement time. With just two electrodes you had to wait for the device to achieve equilibrium with the material being measured. If the concentration of oxygen on the probe side of the barrier was higher or lower than the material side you would get false measurements, particularly in low oxygen scenarios because you have oxygen trapped in the probe.

By keeping the state of oxygen inside the probe constant and replacing consumed molecules you now can measure almost instantly.


Yes, but how do you do that? That magical third electrode sounds harder to make than the original problem.

Edit: I think I get it now, it's a chemical reaction. By applying a voltage with some polarity to the 3rd electrode you can run the reaction in reverse. Still very hard to achieve because you have to make sure the reactions happen at the same rate with the same efficiency, which is far from trivial. This must be a very high end sensor for all this effort to make sense.


An oxygen molecule does some chemical reaction on the sensor electrode that releases an electron, maybe it's made of iron and turns into rust. If you supply the same current to another electrode to do the opposite reaction, maybe one made of rust that turns into iron, it balances.

The sensors must be consumable with a certain lifetime.
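To make the balance concrete, here's a hedged sketch using Faraday's law. The 4-electron reduction (O2 + 2H2O + 4e- -> 4OH-) is an assumption about the chemistry, not something stated above:

```python
# Relate a sensor current to the rate of oxygen consumption,
# assuming 4 electrons transferred per O2 molecule reduced.
FARADAY = 96485.0     # C per mole of electrons
ELECTRONS_PER_O2 = 4

def o2_flux_mol_per_s(current_amps: float) -> float:
    """Moles of O2 consumed per second for a given sensor current."""
    return current_amps / (ELECTRONS_PER_O2 * FARADAY)

# e.g. a 1 uA signal corresponds to roughly 2.6e-12 mol O2/s
print(o2_flux_mol_per_s(1e-6))
```

The same relation run in reverse is what lets a supplied current regenerate the sacrificial electrode at a matching rate.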


Yes.

Zinc can do this too. But I like silver, its oxide has decent conductivity.

One of the common arrangements on a basic two-electrode sensor is to have one gold electrode to make contact with the electrolyte, and the electrolyte provides conductivity to a sacrificial silver electrode. With electrolyte exposed to the atmosphere through an oxygen-permeable membrane.

As oxygen makes its way through the membrane, it is consumed by the silver at a steady rate and equilibrium is achieved relative to how much oxygen is in the atmosphere. This generates a steady current which is amplified to move a needle on a gauge, where there are knobs to adjust the meter until it displays the correct amount of oxygen during calibration against a known concentration. And must also be calibrated to display zero when there is no oxygen.
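In software terms, the zero/span knob adjustment amounts to a two-point linear fit. A sketch with made-up illustrative readings (real instruments do this in analog circuitry):

```python
# Two-point calibration: map raw sensor readings to %O2 given a
# zero-gas reading and a known-concentration (span) reading.
def make_calibration(zero_reading: float, span_reading: float,
                     span_o2_percent: float = 20.9):
    """Return a function converting raw readings to %O2."""
    gain = span_o2_percent / (span_reading - zero_reading)
    return lambda raw: (raw - zero_reading) * gain

# Calibrate against nitrogen (0% O2) and ambient air (20.9% O2):
to_percent_o2 = make_calibration(zero_reading=0.05, span_reading=2.14)
print(to_percent_o2(2.14))  # 20.9 at the span point
print(to_percent_o2(0.05))  # 0.0 at the zero point
```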

Eventually even if the membrane never gets fouled the oxidized silver builds up in the electrolyte chamber and response deteriorates so maintenance is needed. Remove the membrane, polish the silver, put in fresh electrolyte, new membrane, and re-calibrate.

Adding a third electrode opens up a number of further possibilities.

One of them is the option to use an additional inert gold or platinum contact, or a salt bridge, as an electrical reference against the original gold or silver sensing electrode. Then, using a more complex circuit than a plain amplifier, controlled responsive current can be applied to the sacrificial silver at the same time. So rather than directly amplifying the current produced by different concentrations of oxygen in the electrolyte (and waiting for it to equilibrate), with 3 (or 4) electrodes the ionic silver concentration in the electrolyte can be maintained electronically in a steady state, and as oxygen permeates, the current required to replace the consumed silver drives a different kind of meter to move the needle the same way as above. In this way the oxygen concentration in the electrolyte varies to a much more limited extent, compared to waiting for it to be depleted from a high amount to zero before the meter bottoms out.

This can be equivalent to constant-ion electrochemical titration.
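The feedback idea can be sketched as a toy control loop (purely illustrative; a real instrument does this with analog circuitry, and the units and gain here are invented):

```python
# Toy simulation: hold the silver-ion level at a setpoint by injecting
# replacement current proportional to the error. The oxygen influx is
# the unknown disturbance; at steady state the injected current
# mirrors it, and that current is what the meter displays.
def simulate(o2_influx: float, setpoint: float = 1.0,
             gain: float = 0.5, steps: int = 200) -> float:
    level = setpoint
    current = 0.0
    for _ in range(steps):
        level -= o2_influx                 # oxygen consumes silver ions
        current = gain * (setpoint - level)
        level += current                   # replacement restores them
    return current  # steady-state current tracks the O2 influx

print(simulate(0.02))  # converges toward 0.02
```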

Disclaimer: I always like to handle things like this like lives depended on it, because lives depended on it.


> Disclaimer: I always like to handle things like this like lives depended on it, because lives depended on it.

You do the best you can. If you can only make inaccurate sensors, make inaccurate sensors. If you can make accurate sensors, make accurate sensors. If they're much more expensive, make both. If your competitor has more accurate sensors, learn how they work.


You definitely don't have to be god tier anything, you just need to know at least a little more than the companies you are consulting for.

This kind of work has been my primary income for the last 4 years or so. Nowhere near on the same level as Feynman, but I know enough about enough other things that I get a lot of reputational referrals.


>you just need to know at least a little more than the companies you are consulting for.

Sometimes (I'd argue often, actually), you don't even need that. Simply having an outside/fresh perspective, and the fact that you aren't part of any of the existing groups/silos, is valuable.


Often the most useful thing is just listening to the right people in the company. I wouldn't be 100% surprised if someone in the company in the story had already had the idea for the third electrode, but it took the suggestion from the high-paid consultant to get it taken seriously.


Probably true, but to get the job in the first place you probably need some sort of showy, impressive credentials.


Factory overhiring can happen easily by adding speculative 2nd and 3rd shifts. No need for extra equipment, tooling, or physical capacity.

Lines with specialty equipment and tooling can also often be sped up. That can allow for other jobs to be added to all the functions that support the processes involved before and after the specialty equipment.

New employees also often require training and some apprenticeship time, meaning they can get hired ahead of actual demand.


I am really not debating whether overhiring is possible in factories.

In tech, the cost of hiring is lower, which makes headcount a much easier speculative bet and layoffs a much easier reset when the bet fails.


The 90th percentile of factory workers makes $45,000. The 90th percentile of software engineers makes $230,000. Hiring in tech is insanely expensive; I'm not familiar with hiring in factories, but it can't possibly have the supply problems that tech workers have. I do on average 115 interviews for every person I hire in tech.


That $45,000 factory worker is often operating a $1.5 million piece of equipment with $10-50,000/hr of materials going through it.

It's an entirely different cost structure.


There are greater issues in hiring for manufacturing when it is for new shifts. Master-level technicians and foremen willing to work those hours can be exceptionally difficult to find, and everything flows out of these people. While similar issues likely exist in discovering talent for software development, I speculate that the factory will, in practice, have a harder time finding people (for new shifts).

In my experience, new shifts are initially added only for specific processes, and even then it's journeyman-level technicians running a small crew to relieve a bottleneck in production.

Alternatively, manufacturers can outsource until they have enough volume to add a shift, but across the economy the net effect is just transferring production from one facility to another.

