I feel like I need to point out that this tool does not do, in any way, what the title claims. Parsing the output of the system profiler tool will not tell you whether your cable is "good", which in practice tends to mean that it supports the protocol the user cares about at that moment. For some examples:
If you connect a Thunderbolt only cable to a USB4 only device, this approach will give you no information about why things are not working.
If you connect a USB 2-only cable to a DisplayPort Alternate Mode display, this approach will not tell you anything about why your display is not turning on.
It's more complicated than "this cable is good/bad". I had a suspicion about one of my cables for months, but just last week I confirmed it with a device that shows the volts/amps/watts/total kWh passing through it: I have a USB-C cable that is orientation-dependent. Plugged in one way it transfers about 5x more power than the other way, and in the lower orientation it's low enough that the earbud case I use it with won't charge.
My Pixel 7 seems to have died completely out of the blue while charging two days ago, using a USB-C cable I thought might be getting a little flaky (connected to my Mac, I'd occasionally get repeating disconnects). I wonder if something along these lines could be the culprit.
I picked it up to find it had shut itself off, and now won't accept any charge, wireless or wired from any combination of power sources and cables. No signs of life at all.
Probably a 180-degree rotation of the plug (on either end). It commonly happens if one of the contacts or conductors for USB-PD signalling is not working correctly. (Because of the way the pinout is designed to work either way around, the conductors used for signalling swap roles depending on the orientation.)
That's so weird, did you wind up coloring one end or something? I still wish we would add color to USB-C wires like USB 3 has, to emphasize features and expected uses. USB-C was a much-needed change from USB 3 and 2 in terms of being reversible and superior, but every manufacturer implements the cables differently, and it's confusing and hard to figure out which cable is best for what.
The audio community love this sort of thing and will pay top dollar for unidirectional cables. Reproducible data proving the claims could be worth millions.
Well, if you listen to audio you would not want the audio to accidentally get confused and head back to where it came from halfway down the cable, right?
Audio people lost me when they complained about tape hiss being an issue with Digital Audio Tape. They then moved on to gold-plated terminals and left-twisted vs. right-twisted pairs of wires inside multi-conductor cables.
“This cut signal reflections, yielding brighter high hats without the brassiness of two-directional cabling. Bass was particularly clear and rumbly without the muddiness we heard from Monoprice cords.”
No, there really is an intrinsic orientation, at least once a cable is plugged in.
The receptacles are symmetric, but a full connection is not. The cable only connects CC through end-to-end on one of A5 or B5, but not both, which lets the DFP detect which of A5 or B5 should be CC. The one not used for CC is then used to power the e-marker in the cable, if any.
This is also true for legacy adapters; for example, for C-to-A USB 3 adapters, the host needs to know which of the two high-speed pairs to use (as USB 3 over A/B connectors only supports one lane, while C supports two).
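To make that concrete, here is a purely illustrative Go sketch (no real hardware access, all names invented) of the decision a DFP makes based on which CC pin sees the UFP's Rd pull-down:

  // Conceptual model only: how a DFP infers plug orientation from CC.
  package main

  import "fmt"

  type ccState struct {
    cc1HasRd bool // Rd pull-down detected on A5
    cc2HasRd bool // Rd pull-down detected on B5
  }

  func orient(s ccState) string {
    switch {
    case s.cc1HasRd && !s.cc2HasRd:
      return "unflipped: CC1 carries CC, CC2 becomes VCONN (powers the e-marker, if any)"
    case s.cc2HasRd && !s.cc1HasRd:
      return "flipped: CC2 carries CC, CC1 becomes VCONN"
    case s.cc1HasRd && s.cc2HasRd:
      return "Rd on both pins: debug accessory, not a normal cable"
    default:
      return "nothing attached"
    }
  }

  func main() {
    fmt.Println(orient(ccState{cc1HasRd: true}))
  }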
I think that I have a specific cable-device-orientation that is broken. Meaning, I think a particular USB C cable won't charge my phone if it's plugged in 'backwards'.
I always assumed that USB C cables use different pins depending on orientation, and that some pins on the cable wore down.
My guess would be they used a one-sided PCB to connect the cable to and used half the wires. Some sockets internally link the power and ground pins, so it works both ways, but you get no resistor network and thus only standard 5 V, which gives you 500 mA max (at best). With the resistors connected by the cable it's about 900 mA to 3 A, which is probably what happens when it's plugged in "correctly". Or some other magic happens on one side of the PCB to fool the charger into pushing the full 3 A.
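For anyone wondering what that "resistor network" does: the source advertises how much current it can supply via a pull-up (Rp) on the CC line, and the sink reads it. A rough sketch from memory of the Type-C spec; treat the exact values as illustrative:

  // The Rp pull-up on CC is how a USB-C source advertises current capability.
  // Values below are the 5 V pull-up variants as I remember them.
  package main

  import "fmt"

  func main() {
    type rp struct{ pullUp, advertises string }
    table := []rp{
      {"56 kΩ to 5 V", "default USB power (500 mA for USB 2.0, 900 mA for USB 3.x)"},
      {"22 kΩ to 5 V", "1.5 A at 5 V"},
      {"10 kΩ to 5 V", "3.0 A at 5 V"},
    }
    for _, r := range table {
      fmt.Printf("%-13s -> %s\n", r.pullUp, r.advertises)
    }
    // If the CC conductor (or its joint) is broken in one orientation, the sink
    // never sees Rp and falls back to default current -- or, with a strictly
    // compliant source, gets no VBUS at all.
  }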
Shouldn't a compliant USB-C DFP not supply Vbus without the resistor network, though, so there should be no charging at all? (Not that all DFPs necessarily do the correct thing, of course.)
It's CC2/VCONN used for eMarker. That pin may be terminated inside the cable and used to power eMarker chip. It can also be used for orientation sensing. I think.
It is not unheard of to have single damaged lines/connector-joints within a cable. The question is whether your cable was designed that way or whether it was damaged in a way that made it do this.
No, I don't get it. Firstly, the normal system command output is not hard to read, but secondly, this output doesn't list any of the capabilities of the cables, just the devices at the ends of them. Perhaps showing an example of the output when the device is plugged in through the wrong cable would have helped. Does the tool produce a similar warning to the system popup, that is "this cable and device are mismatched"?
As far as I understand, the idea is to determine whether the cable is the bottleneck by comparing a hardcoded list of theoretical device capabilities with the actually observed connection speeds as reported by the OS.
It would be nice to just compare with the device's reported maximum capability, but I'm not sure whether macOS exposes that in any API.
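Something like this seems to be the idea. Not the tool's actual code, just a sketch with made-up device names, comparing the negotiated speed the OS reports against a hardcoded expectation:

  // Sketch of the "hardcoded expectations vs. observed speed" approach.
  package main

  import "fmt"

  // expectedMbps is a hand-maintained table: device name -> speed it should negotiate.
  var expectedMbps = map[string]int{
    "Samsung Portable SSD T7": 10000, // hypothetical entry: USB 3.2 Gen 2 device
    "Some USB 2.0 Webcam":     480,   // hypothetical entry: 480 Mbps is fine here
  }

  func verdict(device string, negotiatedMbps int) string {
    want, known := expectedMbps[device]
    switch {
    case !known:
      return "unknown device; can't judge the cable"
    case negotiatedMbps < want:
      return fmt.Sprintf("link is %d Mbps but device should do %d Mbps: suspect the cable (or port)",
        negotiatedMbps, want)
    default:
      return "link speed matches expectations"
    }
  }

  func main() {
    fmt.Println(verdict("Samsung Portable SSD T7", 480))
  }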
This only shows you the minimum of what the cable and adapter support together, though. I believe this is a fundamental limitation of the protocol; the source won't tell you about voltage/current combinations not supportable by the cable.
The reason is probably that anything faster than USB 2.0 (480 Mbit/s) or supporting power over 3 A/60 W will need to have an active marker, and to read that, you'll need something slightly more complex than a connection tester.
That said, these things do seem to exist at this point, as sibling comments have pointed out.
As an aside, it's a real shame devices with USB-C ports don't offer this out of the box. They need to be able to read the marker anyway for regular operation!
I'm curious as to why it is so expensive? Admittedly I know very little about electronics, and naturally the validation testing that a cable manufacturer does is going to be more thorough, but for consumer-grade testing couldn't we just have an FPGA or microcontroller scream the Fibonacci sequence in one end and have another listen for the Fibonacci sequence on the other end? Sort of like memtest, but ramping up the speed until the transmission becomes garbled.
For a "regular" USB C that supports USB 2.0 speeds (and is rated for 60W and therefore lacks an internal e-marker chip), there's just 5 wires inside: Two for data, two for power, and one for CC. There's nothing particularly complex about testing those wires for end-to-end continuity (like a cheapo network cable tester does).
A charging-only cable requires only 3 wires.
But fancier cables bring fancier functions. Do you want to test whether the cable supports USB 3? With one lane, or two? USB 4? Or what of the extra bits supporting alt modes like DisplayPort and MHL, and the bag of chips that is Thunderbolt -- does all of that need to be tested, too? (And no, that earlier 120 Gbps figure isn't a lie.)
And power? We're able to put up to -- what -- 240 W through some of these cables, right? That's a beefy bit of heat to dissipate, and those cables come with smarts inside of them that need to be negotiated with.
I agree that even at the extremes, it's still somewhere within the realm of some appropriate FPGA bits or maybe a custom ASIC, careful circuit layout, a big resistor, and a power supply. And with enough clones from the clone factories beating each other up on pricing, it might only cost small hundreds of dollars to buy.
So then what? You test the fancy USB-C Thunderbolt cable with the expensive tester, and pack it up for a trip for an important demo -- completely assured of its present performance. And when you get there, it doesn't work anyway.
But the demo must proceed.
So you find a backup cable somewhere (hopefully you thought to bring one yourself, because everyone around you is going to be confused about whatever it is that makes your "phone charger" such a unique and special snowflake that the ones they're trying to hand to you cannot ever be made to work), plug that backup in like anyone else would even if they'd never heard the term "cable tester," and carry on.
The tester, meanwhile? It's back at home, where it hasn't really done anything but cost money and provide some assurances that turned out to be false.
So the market is limited, the clone factories will thus never ramp up, and the tester no longer hypothetically costs only hundreds of dollars. It's right back up into the multiple-$k range like the pricing for other low-volume boutique test gear is.
(I still want one anyway, but I've got more practical things to spend money on...like a second cable to use for when the first one inevitably starts acting shitty.)
> The script parses macOS’s system_profiler SPUSBHostDataType2 command, which produces a dense, hard-to-scan raw output
I couldn't find the source (the link in the article points to a GitHub repo of a user's home directory; I hope for them it doesn't contain secrets), but on my system, system_profiler -json produces JSON output. From the quoted text, it doesn't seem they used that.
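For what it's worth, a minimal Go sketch of that route, shelling out for JSON instead of scraping text. The exact schema handling is an assumption, so it just decodes into a generic map:

  // Sketch: ask system_profiler for JSON rather than parsing human-readable output.
  package main

  import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
  )

  func main() {
    out, err := exec.Command("system_profiler", "SPUSBDataType", "-json").Output()
    if err != nil {
      log.Fatal(err)
    }
    var report map[string]any
    if err := json.Unmarshal(out, &report); err != nil {
      log.Fatal(err)
    }
    // Top level is keyed by the data type name; each value is a list of
    // buses/devices with fields like "_name" nested inside.
    for key, val := range report {
      items, _ := val.([]any)
      fmt.Printf("%s: %d top-level entries\n", key, len(items))
    }
  }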
I hope this doesn't become a trend. Moving it to Go means you need to compile it before you run it, or blindly run an uninspected binary from some random guy.
It's not like the performance of this could have motivated it
No idea, I haven't had a look at this code in particular.
I'm just saying that I've seen several "small tools that could have been shell scripts" in Go or another more structured language and never wished they were shell scripts instead.
I mean, you shouldn't blindly run a shell script any more than a binary anyway. And if you're reading the code, I'd rather read Go than bash any day. That said, yes, there is an extra compilation step.
Presumably there is a sensible way to do this in Go by calling an API and getting the original machine-readable data rather than shelling out to run an entire sub-process for a command-line tool and parsing its human-readable (even JSON) output. Especially as it turns out that the command-line tool itself runs another command-line tool in its turn. StackExchange hints at looking to see what API the reporter tool under /System/Library/SystemProfiler actually queries.
> But you didn't see that the source was one level up in the directory tree from the untrustworthy binary blob?
No, silly me. I briefly searched for a src directory, but of course, I should have searched for a bin directory, as that's where vibe coding stores sources /s.
lsusb will get you this info in Linux, but I like the idea of a little wrapper tool to make the output easier to parse.
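A rough sketch of what a Linux-side wrapper could look like, assuming the usual sysfs layout rather than parsing lsusb output (each device directory exposes its negotiated speed in Mbit/s):

  // List connected USB devices and their negotiated link speed from sysfs.
  package main

  import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
  )

  func main() {
    paths, _ := filepath.Glob("/sys/bus/usb/devices/*/speed")
    for _, p := range paths {
      speed, err := os.ReadFile(p) // e.g. "480", "5000", "10000"
      if err != nil {
        continue
      }
      name, _ := os.ReadFile(filepath.Join(filepath.Dir(p), "product"))
      fmt.Printf("%-40s %s Mbit/s\n",
        strings.TrimSpace(string(name)),
        strings.TrimSpace(string(speed)))
    }
  }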
480 vs. 5000 Mbps is a pernicious problem. It's very easy to plug in a USB drive and it looks like it works fine and is reasonably fast. Right until you try to copy a large file to it and are wondering why it is only copying 50 MB/s.
It doesn't help that the world is awash in crappy charging A-to-C cables. I finally just threw them all away.
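Back-of-the-envelope numbers for why ~40-50 MB/s is the telltale sign of a USB 2.0 link (the efficiency factors below are rough, illustrative estimates):

  package main

  import "fmt"

  func main() {
    links := []struct {
      name       string
      lineMbps   float64
      efficiency float64 // rough payload efficiency after encoding + protocol overhead
    }{
      {"USB 2.0 High-Speed", 480, 0.70},
      {"USB 3.0 SuperSpeed", 5000, 0.64}, // includes 8b/10b line encoding
    }
    for _, l := range links {
      raw := l.lineMbps / 8
      fmt.Printf("%-20s raw %4.0f MB/s, realistic ~%3.0f MB/s\n", l.name, raw, raw*l.efficiency)
    }
  }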
I remember hearing it’s even possible to plug in a USB-A plug too slowly, making the legacy pins make contact first, which results in a 480 Mbps connection – despite the cable, the host, and the device all supporting superspeed!
Couldn't figure out why my 5-disk USB enclosure was so ungodly slow. I quickly saw that it was capping suspiciously close to a constant ~40 MB/s, i.e. 480 Mbps.
lsusb -v confirmed. As it happened I did some maintenance and had to unplug the whole bay.
Since the port was nearly tucked against a wall I had to find the port by touch and insert somewhat slowly in steps (brush finger/cable tip to find port, insert tip at an angle, set straight, push in) but once in place it was easy to unplug and insert fast...
This was driving me "vanilla ice cream breaks car" nuts...
That's the price of strong backwards compatibility. Otherwise, you wouldn't be able to use a USB 3 (superspeed) device on a USB 3 host port with a USB 2 cable at all.
And if you hate this, you should probably never look into these (illegal by the spec, but practically apparently often functional) splitters that separate the USB 2 and 3 path of a USB 3 capable A port so that you can run two devices on them without a hub ;)
On a somewhat related note, I like the IO shield of my new MSI motherboard - the USB ports are tersely labeled "5G", "10G", "40G" (and a few lingering "USB 2.0").
Content-wise a nice idea, but I also like the conclusion about how AI made this possible in the first place. The author mentions this motivation themselves. AI is undoubtedly perfect for utilities and small (even company-internal) tools for personal use, where maintainability is secondary because you can ditch the tool or rebuild it quickly.
> Two years ago, I wouldn’t have bothered with the rewrite, let alone creating the script in the first place. The friction was too high. Now, small utility scripts like this are almost free to build.
> That’s the real story. Not the script, but how AI changes the calculus of what’s worth our time.
I've found that to be very true. For bigger projects, I've had rather mixed results from AI, but for small utility scripts, it's perfect.
But like the author, I've found that it's usually better to have the LLM output Python, Go, or Rust than use bash. So I've often had to ask it to rewrite at the beginning. Now I just skip bash directly.
What all the naysayers are missing is the tons of small wins that are happening every single day from people using AI to write code, wins that weren't possible before.
I mentioned in a thread a few weeks ago that we maintain a small Elixir-Rust library, and I have never coded Rust in my life. Sure, it's about 20 lines of Rust, mostly mapping to the underlying Rust lib, but so far I've used Claude to just maintain it (fix deprecations, perform upgrades, etc.). This simply wasn't possible before.
Ugh. I appreciate the tool and I suppose I can appreciate AI for making the barrier to entry for writing such a tool lower. I just don't like AI, and I will continue with my current software development practices and standards for production-grade code - code coverage, linting, manual code reviews, things like that.
At the same time though I'm at a point in my career where I'm cynical and thinking it really doesn't matter because whatever I build today will be gone in 5-10 years anyway (front-end mainly).
Is it worth it for everything? If you need a bash script that takes some input and produces some output, does it matter if it's from an AI? It has to get through code review, and the person who made it has to read through it before code review so they don't look like an ass.
Yeah, recently I needed a script to ingest individual JSON files into an SQLite DB. I could have spent half the day writing it, or asked an AI to write it and spent 10 minutes checking that the data in the DB is correct.
There are plenty of non critical aspects that can be drastically accelerated, but also plenty of places where I know I don't want to use today's models to do the work.
I worked as a contractor for a contractor who had AI write a script to update a repository (essentially doing a git pull). But for some strange reason it was using the GitHub API instead of git. The best part is that if the token wasn't set up properly, it overwrote every file (including itself) with 404s.
Ingesting JSON files into SQLite should only take half a day if you're doing it in C or Fortran for some reason (maybe there is a good reason). In a high-level language it shouldn't take much more than 10 minutes in most cases, I would think?
regarding how long the ingestion should take to implement, I'm going to say: it depends!
It depends on how complex the models are, because now you need to parse your model before inserting it. Which means you need the tables to be in the right format. And then you need your loops; for each file you might have to insert anywhere between 5 and 20 nested entities. And then you either have to use an ORM or write each SQL query by hand.
All of which I could do obviously, and isn't rocket science, just time consuming.
The author literally says this is vibe-coded. You even quoted it. How the hell is this a "Trojan horse"? Did the Greeks have a warning sign saying "soldiers inside" on their wooden horse?
Because it’s not in the title, and I personally prefer up-front warnings when generative “AI” is used in any context, whether it’s image slop or code slop
I'm not a go developer and this kind of thing is far from my area of expertise. Do you mind giving some examples?
As far as I can tell from skimming the code, and as I said, without knowledge of Go or the domain, the "shape" of the code isn't bad. If I got any vibes (:)) from it, it was the lack of error handling and over-reliance on exact string matching. Generally speaking, it looks quite fragile.
FWIW I don't think the conclusion is wrong. With limited knowledge he managed to build a useful program for himself to solve a problem he had. Without AI tools that wouldn't have happened.
There's a lot about it that isn't great. It treats Go like a scripting language; it's got no structure (1,000+ lines in a single file); nothing is documented; the models are flat, with no methods; it hard-codes lots of strings (even the flags are string comparisons instead of using proper flag parsing); regexes are compiled and used inline; device support is limited to some pre-configured, hard-coded strings; and it makes assumptions about storage device speeds based on the device name: nvme=fast, hdd=slow, etc.
On the whole, it might work for now, but it'll need recompiling for new devices, and is a mess to maintain if any of the structure of the data changes.
If a junior on my team asked me to review this, they'd be starting again; if anyone above junior PR'd it, they'd be fired.
> Two years ago, I wouldn’t have bothered with the rewrite, let alone creating the script in the first place. The friction was too high. Now, small utility scripts like this are almost free to build.
This aligns with the hypothesis that we should see lots and lots of "personalized" or single-purpose software if vibe coding works. This particular project is one example. Are there a ton more out there?
+1 here. With the latest Chrome Manifest V3 shenanigans, the Pushbullet extension stopped working and the devs said they have no interest in pursuing it (understandable).
I always wanted a dedicated binary anyway, so 1 hour later I got: https://github.com/emilburzo/pushbulleter (10 minutes vibe coding with Claude, 50 minutes reviewing code/small changes, adding CI and so on). And that's just one where I put in the effort of making it open source, as others might benefit, nevermind the many small scripts/tools that I needed just for myself.
So I share the author's sentiments; before, I would have considered the "startup cost" too high in an ever-busy day to even attempt it. Now, after 80% of what I wanted was done for me, the fine-tuning didn't feel like much effort.
Yep! Nothing worth sharing/publishing from me, but quite a few mini projects that are specific to my position at a small non-tech company I work for. For example we send data to a client on a regular basis, and they send back an automated report with any data issues (missing fields, invalid entries, etc) in a human-unfriendly XML format. So I one-shotted a helper script to parse that data and append additional information from our environment to make it super easy for my coworkers to find and fix the data issues.
Definitely.... I just bought a new NAS and, after moving stuff over and downloading some new movies and series, "vibe coding" a handful of scripts which check completeness of episodes against some database, or the difference between the filesystem and what Plex recognized, is super helpful. I noticed one movie which was obviously compressed from 16:9 to 4:3, and two minutes later I had a script which can check my entire collection for PAR/DAR oddities and provides a way to correct them using ffmpeg.
These are all things I could do myself but the trade off typically is not worth it. I would spend too much time learning details and messing about getting it to work smoothly. Now it is just a prompt or two away.
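Not the commenter's actual script, obviously, but a sketch of how such a PAR/DAR check might shell out to ffprobe. The ffprobe flags are standard; the filtering logic is purely an assumption:

  // Flag files whose video stream has a non-square sample aspect ratio.
  package main

  import (
    "fmt"
    "os"
    "os/exec"
    "strings"
  )

  func aspectRatios(path string) (sar, dar string, err error) {
    out, err := exec.Command("ffprobe", "-v", "error",
      "-select_streams", "v:0",
      "-show_entries", "stream=sample_aspect_ratio,display_aspect_ratio",
      "-of", "csv=p=0", path).Output()
    if err != nil {
      return "", "", err
    }
    parts := strings.Split(strings.TrimSpace(string(out)), ",")
    if len(parts) != 2 {
      return "", "", fmt.Errorf("unexpected ffprobe output: %q", out)
    }
    return parts[0], parts[1], nil
  }

  func main() {
    for _, f := range os.Args[1:] {
      sar, dar, err := aspectRatios(f)
      if err != nil {
        fmt.Fprintln(os.Stderr, f, err)
        continue
      }
      // "N/A" also shows up for some containers; treat it as worth a look too.
      if sar != "1:1" {
        fmt.Printf("%s: SAR %s, DAR %s\n", f, sar, dar)
      }
    }
  }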
Just an hour ago I "made" one in 2 minutes to iterate through some files, extract metadata, and convert to CSV.
I'm convinced that hypothesis is true. The activation energy (with a subscription to one of the big 3, in the current pre-enshittification phase) is approximately 0.
Edit: I also wouldn't even want to publish these one-off, AI-generated scripts, because for one they're for specific niches, and for two they're AI generated so, even though they fulfilled their purpose, I don't really stand behind them.
>Just an hour ago I "made" one in 2 minutes to iterate through some files, extract metadata, and convert to CSV.
Okay, but lots of us have been crapping out one-off Python scripts for processing things for decades. It's literally one of the main ways people learned Python in the 2000s.
What "activation energy" was there before? Open a text file, write a couple lines, run.
Sometimes I do it just from the interactive shell!
Like, it's not even worth it to prompt an AI for these things, because it's quicker to just do it.
A significant amount of my workflow right now is a python script that takes a CSV, pumps it into a JSON document, and hits a couple endpoints with it, and graphs some stats.
All the non-specific stuff the AI could possibly help with are single lines or function calls.
The hardest part was teasing out Python's awful semantics around some typing stuff. Why Python is unwilling to parse an int out of "2.7" I don't know, but I wouldn't even have known to prompt an AI for that requirement, so there's no way it could have gotten that right.
It's like ten minutes to build a tool like this even without AI. Why weren't you building them before? Most scientists I know build these kinds of microscripts all the time.
Because even though I can learn some random library, I don’t really care to. I can do the architecture, I don’t care to spend half an hour understanding deeply how arguments to some API work.
Example: I rebuilt my homelab in a weekend last week with Claude.
I set up Terraform/Ansible/Docker for everything, and this was possible because I let Claude handle all the arguments/details. I never used to bother because I thought it was tedious.
Absolutely. I can come home from a long day of video meetings, where normally I'd just want to wind down. But instead I spend some time instructing an AI how to make a quality of life improvement for myself.
For me, Claude churns out like 10-15 Python scripts a day. Some of these could be called utilities. It helps with debugging program outputs, quick statistical calculations, stuff I would use Excel for. Yesterday it noticed a discrepancy that led to us finding a bug.
So yes, there is a ton, but why bother publishing and maintaining them now that anyone can produce them? Your project is not special or worthwhile anymore.
> Two years ago, I wouldn’t have bothered with the rewrite, let alone creating the script in the first place. The friction was too high. Now, small utility scripts like this are almost free to build.
Adding to the theory that soon we're going to prefer writing code over downloading ready-made code, because the friction is super low.
Interesting. Is there a way to adapt this for Linux or Windows? Many users, not just Mac users, face issues with USB-C cables. Practical cross-platform tools could be very helpful.
> I was punching through my email actively as Claude was chugging on the side.
I wonder how much writing these scripts cost. Were they done in Claude's free tier, pro, or higher? How much of their allotted usage did it require?
I wish more people would include the resources needed for these tasks. It would really help evaluate where the industry is in terms of accessibility: how much of it is reserved for those with sufficient money, and how that scales.
> Go also has the unique ability to compile a cross-platform binary, which I can run on any machine.
Huh? Is this true? I know Go makes cross-compiling trivial - I've tried it in the past, it's totally painless - but is it also able to make a "cross platform binary" (singular)?
How would that work? Some kind of magic bytes combined with a wrapper file with binaries for multiple architectures?
It's just because vibe coding is still "new" and various people have mixed results with it. This means that anecdotes today of either success or failure still carry some "signal".
It will take some time (maybe more than a decade) for vibe coding to be "old" and consistently correct enough where it's no longer mentioned.
Same thing happened 30 years ago with "The Information Superhighway" or "the internet". Back then, people really did say things like, "I got tomorrow's weather forecast of rain from the internet."
Why would they even need to mention the "internet" at all?!? Because it was the new thing back then and the speaker was making a point that they didn't get the weather info from the newspaper or tv news. It took some time for everybody to just say, "it's going to rain tomorrow" with no mentions of internet or smartphones.
Vibe coding, in my understanding, is losing/confusing the mental model of your codebase: you don't know what is what and what is where. I haven't found a term for "competently coding with AI as the interface".
I mean, they seem to address that pretty directly in the post:
> Two years ago, I wouldn’t have bothered with the rewrite, let alone creating the script in the first place. The friction was too high. Now, small utility scripts like this are almost free to build.
> That’s the real story. Not the script, but how AI changes the calculus of what’s worth our time.
"My static blog templating system is based on programming language X" is the stereotypical HN post. In theory the choice of programming language doesn't matter. But HNers like to mention it in the title anyway.
I don't know if that necessarily helps though, because I've seen USB3 cables that seemingly have the bandwidth and power capabilities, but won't do video.
Capabilities are printed on the side of Ethernet cables, and the text printed on the cable rarely seems related to the actual capabilities of the Ethernet plug. Some Cat 5e cables are rated for 1000 Mbps but will happily run 5000 or 2500 Mbps (because those standards came after the text on the cable was last updated); other "Cat 6" cables are scams and struggle to achieve gigabit speeds without packet loss.
Plus, it doesn't really matter if you put "e-marker, 100 W, USB 3 2x2, 20 Gbps" on a cable when half those features depend on compatibility from both sides of the plug (notably, supposedly high-end devices not supporting 2x2 mode, DP Alt Mode, or charging/drawing more than 60 W of power).
USB cables push the boundaries of signal integrity hard enough that unless it's a 1 foot passive cable you're not really going to get any surprise speed boosts.
And when they upped the max voltage they didn't do it for preexisting cables, no matter what the design was.
> those features depend on compatibility from both sides of the plug
That's easy to understand. A cable supports (or doesn't support) a device; it can't give the device new abilities. It doesn't make labeling less valuable.
We used to what? Back in the day there were countless cables with no printing. Sometimes the only way to know whether they were 3.0 or not was checking if they had a blue connector.
Vibe coding. Producing code without considering how we should approach the problem. Without thinking about where exactly the problem is. This is like Electron, all over again.
Of course I don't have any problems with the author writing the tool, because everyone should write what the heck they want and how they want it. But seeing it get popular tells me that people have no idea what's going on.
If the author knows what they're doing and understands the model of the code at least, I don't understand the reason for mentioning that it was vibe coded. Maybe declaring something is vibe coded removes part of the responsibility nowadays?
HN guidelines say one shouldn't question whether another commenter has read TFA, so I won't do that. But TFA explains exactly why it was vibe coded, and exactly why they're mentioning that it was vibe coded, which is that that was the central point of TFA.
I verify dynamic linking, ensure no superfluous dylibs are required. I verify ABI requirements and ensure a specific version of glibc is needed to run the executable. I double-check if the functions I care about are inlined. I consider if I use stable public or unstable private API.
But I don't mean that the author doesn't know what's going on in his snippet of code. I'm sure he knows what's going on there.
I mean that the upvoters have no idea what's going on, by boosting vibe coding. People who upvote this are the reason for the global decline in software quality in the near future.
All your stuff is still pretty high-level compared to the bare metal inside the CPU. Do you know which register the compiler decided to use to store this variable, or whether the CPU will take this execution branch or not?
It's all abstraction, we all need to not know some low level layer to do our job, so please stop gatekeeping it.
FWIW, it would take 10 minutes to download a Linux Docker image and build it in Go to test. The harder part is getting the information from a different API on Linux.
A Linux Docker image probably doesn't have any USB devices exposed to it. Well, it depends on exactly how you run it, but e.g. if you use Docker Desktop for Mac, its embedded Linux VM doesn't have any USB passthrough support. This is the kind of thing where a physical Linux host (laptop/desktop/NUC/RPi/etc.) is much more straightforward than running Linux in a VM (or as a K8s pod in a datacenter somewhere).
I feel like we kind of got monkey's-paw'ed on USB-C. I remember during the 2000s-2010s people were drowning in a sea of disparate and incompatible connectors for video, audio, data, power, etc. and were longing for "One Port To Rule Them All" that could do everything in one cable. We kind of got that with USB-C, except now you see a USB-C cable/port and you have no idea if it supports data only, or data + charging, what speeds of data/charging, does it support video? Maybe it does, maybe it doesn't. At least it can plug in both ways… most of the time.
I bought the coolest, fattest USB-C cables, and I failed to read the description closely enough to notice they only support USB 2 speeds! They work fine for the specific use I have for them, but I wish I could use 'em for everything!
Let's say for C-to-C, are you talking about swapping the head/tail? Or simply connecting at a different angle (180 degrees)?
Is there any way to check this other than experiment?
My "solution" so far has been to not buy cheap cables and just hope I get quality in return.
Well sure, a standards-compliant cable will work in either orientation, but it's always possible for some but not all of the pins or wires to break.
Maybe the negotiation can fail & the plugged in orientation is then the only one that works?
I always assumed that USB C cables use different pins depending on orientation, and that some pins on the cable wore down.
Maybe that's what happened here?
It was not a cheap cable, it was a medium-priced one with good reviews from a known brand.
Hardware -> USB
I also use the app to check what wattage my cables are when charging my MacBook (Hardware -> Power)
Great for identifying not just bad cables, but also data rates.
https://www.kickstarter.com/projects/electr/ble-caberqu-a-di...
There are plenty for Ethernet, but none such ones for USB. Was I looking with the wrong keywords or such device does not exist?
Note: I have a dongle that measures the power when inserted between the laptop and the charger, this is not what I am looking for
https://treedix.com/collections/best-seller/products/treedix...
https://fr.aliexpress.com/item/1005007509475055.html
Edit: This will test whether the cable is functioning properly. It will show the connections and indicate whether the cable supports only power or also data transfer. However, it won’t provide information about the USB-C cable type or its speed capabilities.
Related: If you are looking for cables, this guy has tested a bunch (mainly for charging capabilities) https://www.allthingsoneplace.com/usb-cables-1
And some metrics on internal reflections.
The latter would require a multi-thousand-dollar machine.
For a "regular" USB C that supports USB 2.0 speeds (and is rated for 60W and therefore lacks an internal e-marker chip), there's just 5 wires inside: Two for data, two for power, and one for CC. There's nothing particularly complex about testing those wires for end-to-end continuity (like a cheapo network cable tester does).
A charging-only cable requires only 3 wires.
But fancier cables bring fancier functions. Do you want to test if the cable supports USB 3? With one lane, or two lanes? USB 4? Or what of the extra bits supporting alt modes like DisplayPort and MHL and the bag of chips that is Thunderbolt -- does that need all tested, too? (And no, that earlier 120Gbps figure isn't a lie.)
And power? We're able to put up to -- what -- 240W through some of these cables, right? That's a beefy bit of heat to dissipate, and those cables come with smarts inside of them that need negotiated with.
I agree that even at extremes, it's still somewhere within the realm of some appropriate FPGA bits or maybe a custom ASIC, careful circuit layout, a big resistor, and a power supply. And with enough clones from the clone factories beating eachother up on pricing, it might only cost small hundreds of dollars to buy.
So then what? You test the fancy USB-C ThunderBolt cable with the expensive tester, and pack it up for a trip for an important demo -- completely assured of its present performance. And when you get there, it doesn't work anyway.
But the demo must proceed.
So you find a backup cable somewhere (hopefully you thought to bring one yourself, because everyone around you is going to be confused about whatever it is that makes your "phone charger" such a unique and special snowflake that the ones they're trying to hand to you cannot ever be made to work), plug that backup in like anyone else would even if they'd never heard the term "cable tester," and carry on.
The tester, meanwhile? It's back at home, where it hasn't really done anything but cost money and provide some assurances that turned out to be false.
So the market is limited, the clone factories will thus never ramp up, and the tester no longer hypothetically costs only hundreds of dollars. It's right back up into the multiple-$k range like the pricing for other low-volume boutique test gear is.
(I still want one anyway, but I've got more practical things to spend money on...like a second cable to use for when the first one inevitably starts acting shitty.)
First Go source: https://github.com/kaushikgopal/dotfiles/blob/f0f158398b5e4d...
It started out as a shell script but was switched to a Go binary (which is what is linked).
Performance isn't everything; readability and maintainability matter too.
Is that the case for this vibe-coded thing? https://news.ycombinator.com/item?id=45513562
* https://github.com/kaushikgopal/dotfiles/blob/master/bin/usb...
(which is inconvenient because USB 3.2 Gen 2x2 20 Gbps external SSD cases are much cheaper than USB 4 cases for now).
Also, he is calling a binary a script, which I find suspicious. This task looks like it should have been a script.
USB-IF, in all their wisdom, used "USB 3.2" to refer to everything from 5 Gbps (USB 3.2 Gen 1×1) to 20 Gbps (Gen 2×2).
https://en.wikipedia.org/wiki/USB_3.0#USB_3.2
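For reference, the renames shake out roughly like this (it's all in the Wikipedia article above):
- USB 3.2 Gen 1×1 - 5 Gbps (the old USB 3.0 / USB 3.1 Gen 1)
- USB 3.2 Gen 1×2 - 10 Gbps over two 5 Gbps lanes (Type-C only)
- USB 3.2 Gen 2×1 - 10 Gbps (the old USB 3.1 Gen 2)
- USB 3.2 Gen 2×2 - 20 Gbps (Type-C only)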
> That’s the real story. Not the script, but how AI changes the calculus of what’s worth our time.
Looking at the GitHub source code, I can instantly tell. It's also full of gotchas.
I have a USB-to-SATA adapter plugged in and it's labeled as [Problem].
https://news.ycombinator.com/item?id=45513256
This is currently the vibe in consulting: possible ways to reduce headcount, pun intended.
https://janschutte.com/posts/program-for-one.html
https://github.com/shepherdjerred/homelab
Last weekend I had a free hour and built two things while sat in a cafe:
- https://yourpolice.events, that creates a nice automated ICS feed for upcoming events from your local policing team.
- https://github.com/AndreasThinks/obsidian-timed-posts, an Obsidian plugin for "timed posts" (finish it in X minutes or it auto-deletes itself)
Windows: There's an example in the WDK here: https://github.com/Microsoft/Windows-driver-samples/tree/mai...
But in general you are right. The article was for developers so mentioning the tool/language/etc. is relevant.
I wouldn't trust this as source code until after a careful audit. No way I'm going to trust a vibe-coded executable.
I think you have a good point about why people say it was vibe coded.
It might also be because they want to join the trend -- without mentioning vibe coding, I don't think this tool would ever reach #1 on Hacker News.
Do you care about the binary code inside your application, or what exactly happens, at the silicon level, when you write printf("Hello World")?
So we shouldn't care about spending $1 for a sandwich, and therefore managing a home budget is pointless?
Different people will care about different layers.
If I got hold of the output and the commands run, I would gladly modify it.
On Linux that produces a lot of info similar to the macOS screenshots, but with values and labels specific to the Linux USB stack.
I wonder if AI could map the Linux lsusb output to a form your tool can use...
(/s)
https://github.com/tuna-f1sh/cyme
No Type-A, no Type-B, no Mini, no Micro...