I read in an earlier HN thread on this: "this is a classic example of a data-driven product decision," i.e. we can reduce costs by $X if we just stop serving goo.gl links -- instead of actually asking how this would impact the customers.
Also helps that they are in a culture which does not mind killing services on a whim.
The Google URL shortener stopped accepting new links around 2018. It has been deprecated for a long time.
I doubt it was a cost-driven decision on the basis of running the servers. My guess would be that it was a security and maintenance burden that nobody wanted.
They also might have wanted to use the domain for something else.
The nature of something like this is that the cost to run it naturally goes down over time. Old links get clicked less so the hardware costs would be basically nothing.
As for the actual software security, it's a URL shortener. They could rewrite the entire thing in almost no time with just a single dev. Especially since it's strictly hosting static links at this point.
It probably took them more time and money to find inactive links than it'd take to keep the entire thing running for a couple of years.
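To give a sense of how small the moving parts are once the link table is frozen, here's a minimal sketch of a read-only redirector using only the Python standard library -- purely illustrative, with a made-up links.json snapshot, and obviously nothing like how goo.gl is actually built or scaled:

    # Minimal read-only URL redirector: a frozen short-code -> URL table served
    # over HTTP. Purely illustrative; not how goo.gl is implemented.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical frozen snapshot of the link table,
    # e.g. {"aB3xYz": "https://example.com/some/long/path", ...}
    with open("links.json") as f:
        LINKS = json.load(f)

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            target = LINKS.get(self.path.lstrip("/"))
            if target:
                self.send_response(301)                 # permanent redirect
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_error(404, "Unknown short link")

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()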
My understanding from conversations I've seen about Google Reader is that the problem with Google is that every few years they have a new wave of infrastructure, which necessitates upgrading a bunch of things about all of their products.
I guess that might be things like some new version of BigTable or whatever coming along, so you need to migrate everything from the previous versions.
If a product has an active team maintaining it they can handle the upgrade. If a product has no team assigned there's nobody to do that work.
My understanding is that (at least at one point) binaries older than about six months were not allowed to run in production. But APIs are "evolving" irregularly so the longer you go between builds the more likely something is going to break. You really need a continuous build going to stay on top of it.
Best analogy I can think of is log-rolling (as in the lumberjack competition).
Google is famously a monorepo and is basically the gold standard of CI/CD.
What does happen is APIs are constantly upgraded and rewritten and deprecated. Eventually projects using the deprecated APIs need to be upgraded or dropped. I don't really understand why developers LOVE to deprecate shit that has users but it's a fact of life.
Second hand info about Google only so take it with a grain of salt.
Simple: you don't get promoted for maintaining legacy stuff. You do get promoted for providing something new that people adopt.
As such, developing a new API gets more brownie points than rebuilding a service that does a better job of providing an existing API.
To be more charitable, having learned lessons from an existing API, a new one might incorporate those lessons learned and be able to do a better job serving various needs. At some point, it stops making sense to support older versions of an API as multiple versions with multiple sets of documentation can be really confusing.
I'm personally cynical enough to believe more in the less charitable version, but it's not impossible.
I agree this is an overriding incentive that hurts customers and companies, and I don't think there's an easy fix. Designing and creating new products demonstrates more promotion-relevant capabilities than maintaining legacy code does.
> I guess that might be things like some new version of BigTable or whatever coming along, so you need to migrate everything from the previous versions.
They deprecate internal infrastructure stuff zealously and tell teams they need to be off of such and such by this date.
But it's worse than that because they'll bring up whole new datacenters without ever bringing the deprecated service up, and they also retire datacenters with some regularity. So if you run a service that depends on deprecated services you could quickly find yourself in a situation where you have to migrate to maintain N+2 redundancy but there's hardly any datacenter with capacity available in the deprecated service you depend on.
Also, how many man years of engineering do you want to spend on keeping goo.gl running. If you were an engineer would you want to be assigned this project? What are you going to put in your perf packet? "Spent 6 months of my time and also bothered engineers in other teams to keep this service that makes us no money running"?
> If you were an engineer would you want to be assigned this project?
If you're high flying, trying to be the next Urs or Jeff Dean or Ian Goodfellow, you wouldn't, but I'm sure there are many thousands of people able to do the job who would love to work for Google, collect a paycheck on a $150k/yr job, and do that for the rest of their lives.
I'd like to encourage you to consider the following two perspectives --
1. A senior Google leader telling the shareholders "we've asked 1% of our engineers, that's 270 people, costing $80M/year, to work on services that produce no revenue whatsoever." I don't think it would pass that well.
2. A Google middle manager trying to figure out if an engineer working exclusively on non-revenue projects is actually being useful or otherwise; this is made more complex by about 30% of the workforce trying to go for the rest and vest option provided by these projects.
A lot of Google infra services are built around the understanding that clients will be re-built to pick up library changes pretty often, and that you can make breaking API changes from time to time (with lots of notice).
But if you don't downgrade the old, then you're endlessly supporting systems, forever. At some point, it does become cheaper to migrate everything to the new.
You know how Google deprecating stuff externally is a (deserved) meme? Things get deprecated internally even more frequently and someone has to migrate to the new thing. It's a huge pain in the ass to keep up with for teams that are fully funded. If something doesn't have a team dedicated to it eventually someone will decide it's no longer worth that burden and shut it down instead.
I think the concern is someone might scan all the inactive links and find that some of them link to secret URL's, leak design details about how things are built, link to documents shared 'anyone with the link' permission, etc.
> I think the concern is someone might scan all the inactive links
How? Barring a database leak I don't see a way for someone to simply scan all the links. Putting something like Cloudflare in front of the shortener with a rate limit would prevent brute force scanning. I assume google semi-competently made the shortener (using a random number generator) which would make it pretty hard to find links in the first place.
Removing inactive links also doesn't solve this problem. You can still have active links to secret docs.
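Rough math on that, assuming (as a guess) 6-character case-sensitive alphanumeric codes and the ~3 billion links figure mentioned elsewhere in the thread:

    # Back-of-the-envelope: how sparse is a random short-code space?
    # Assumes 6-character case-sensitive alphanumeric codes; goo.gl's real
    # scheme may differ. The 3 billion figure is the thread's ballpark.
    alphabet = 62                      # a-z, A-Z, 0-9
    keyspace = alphabet ** 6           # ~5.7e10 possible codes
    issued = 3_000_000_000

    print(f"keyspace: {keyspace:,}")
    print(f"chance a blind guess hits a real link: {issued / keyspace:.1%}")
    # Roughly 1 guess in 19 lands on something, so the rate limiting in front
    # matters at least as much as the randomness of the codes.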
> I doubt it was a cost-driven decision on the basis of running the servers. My guess would be that it was a security and maintenance burden that nobody wanted.
Yeah, I can't imagine it being a huge cost saver? But I'm guessing the people who developed it have long since moved on, and it stopped being a cool project. And depending on the culture inside Google, it just doesn't pay career-wise to maintain someone else's project.
I think the problem with URL shorteners like Google's, which include the company name, is that to the layperson there is possibly an implied level of safety.
Here is a service that basically makes Google $0 and confuses a non-zero amount of non-technical users when it sends them to a scam website.
Also, in the age of OCR on every device they make basically no sense. You can take a picture of a long URL on a piece of paper then just copy and paste the text instantly. The URL shortener no longer serves a discernible purpose.
Sure it can. It takes X people Y hours a day/week/month to perform tasks related to this service, including planning and digging up the context behind it. Those X people make Z dollars per year. It's an extremely simple math equation.
Goo.gl didn't have customers, it had users. Customers pay, either with money or their personal data, now or the future. Goo.gl did not make any money or have a plan to do so in the future.
Why is it evil? Assume that a free URL shortener is a good thing and that shutting one down is a bad thing, and note that every link shortener has costs (not just the servers -- constant moderation, since scammers and worse use them) and no revenue. The only possible outcome is for them all to eventually shut down, causing unrecoverable linkrot.
Given those options, an ad seems like a trivial annoyance to anyone who very much needs a very old link to work. Anyone who still has the ability to update their pages can always update their links.
Which raises the obvious question -- why make a service that you know will eventually be shut down because of said economics. Especially one that (by design) will render many documents unusable when it is shut down.
While I generally find the "killed by Google" thing insanely short-sighted, this borders on straight-up negligence.
The monetary value of the goodwill and mindshare generated by such a free service is hard to calculate, but definitely significant. I wouldn't be surprised if it was more than it costs to run.
I always figured most of the real value of these URL hashing services was as a marketing tracking tool -- sort of equivalent to the "share with" widgets, which conveniently also dump tons of analytics to the services.
I will be honest I was never in an environment that would benefit from link shortening, so I don't really know if any end users actually wanted them (my guess twitter mainly) and always viewed these hashed links with extreme suspicion.
One of the complaints about Google is that it's difficult to launch products due to bureaucracy. I'm starting to think that's not a bad thing. If they'd done a careful analysis of the cost of jumping on the URL-shortener bandwagon, we wouldn't be here. Maybe it's not a bad thing they move slower now.
I would bet that the salaries paid to the product managers behind shutting this down, during the time they worked on shutting it down, outweigh the annual cost of running the service by an order of magnitude.
If companies can spend billions on AI with nothing in return and be okay with that, in the sense of giving away free stuff (okay, I'll admit not completely free since you are the product, but still free),
then honestly they should also be okay with keeping the goo.gl links working.
Sounds kinda bad for some good will but this is literally google, the one thing google is notorious for is killing their products.
This is basically modern SV business. This old data is costing us about a million a year to hold onto. KILL IT NOW WITH FIRE.
Hey lets also dump 100 Billion dollars into this AI thing without any business plan or ideas to back it up this year. HOW FAST CAN YOU ACCEPT MY CHECK!
At this point, anyone depending on Google for anything deserves to get burned.
I don't know how much more clearly they could tell their users that Google has absolutely no respect for users without drone shipping boxes of excrement.
Can't dig the document up right now, but in their Chrome dev process they say something along these lines: "even if a feature is used by only 0.01% of users, at scale that's a lot of users. Don't remove it until you've made sure the impact is negligible."
At Google scale I'm surprised [1] this is not applied everywhere.
[1] Well, not that surprised
Yup, 0.01% of users at scale is indeed a lot of users.
This is exactly why many big companies like Amazon, Google and Mozilla still support TLSv1.0, for example, whereas all the fancy websites would return an error unless you're using TLSv1.3 as if their life depends on it.
In fact, I just checked a few seconds ago with `lynx`, and Google Search even still works on plain old HTTP without the "S", too — no TLS required whatsoever to start with.
Most people are very surprised by this revelation, and many don't even believe it, because it's difficult to reproduce this with a normal desktop browser, apart from lynx.
But this also shows just how out of touch Walmart's digital presence really is, because somehow they deem themselves important enough to mandate TLSv1.2 and the very latest browsers, unlike all the major ecommerce heavyweights, and deny service to anyone who doesn't have the latest device with all the latest updates installed, breaking even slightly outdated browsers that do support TLSv1.2.
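If you want to check the plain-HTTP claim yourself without lynx, a few stdlib lines will show what actually comes back over port 80 (whether that's a results page or a redirect to HTTPS may vary by region and change over time):

    # Probe plain HTTP (no TLS) and report what comes back. The exact response
    # (a results page vs. a redirect to HTTPS) may differ by client, region,
    # and over time, so this only shows the status, it doesn't assert one.
    import http.client

    conn = http.client.HTTPConnection("www.google.com", 80, timeout=10)
    conn.request("GET", "/search?q=hello", headers={"User-Agent": "Lynx/2.9.0"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    print("Location:", resp.getheader("Location"))   # set if it redirects to HTTPS
    conn.close()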
So bizarre. Embedded links, docs, social posts, stuff that could be years and years old, and they're expecting traffic to them recently? Why do they seem to think their link shortener is only being used for like someone's social profile linktree or something. Some marketing person's bizarre view of how the web is being used.
"Actively used" criteria scrods that critical old document you found, in which someone trusted it was safe to use a Google link.
Not knowing all the details motivating this surprising decision, from the outside, I'd expect this to be an easy "Don't Be Evil" call:
"If we don't want to make new links, we can stop taking them (with advance warning, for any automation clients). But we mustn't throw away this information that was entrusted to us, and must keep it organized/accessible. We're Google. We can do it. Oddly, maybe even with less effort than shutting it down would take."
Using a link shortener for any kind of long-term link, no matter who hosts it, has never been a good idea. They're for ephemeral links shared over limited mediums like SMS or where a human would have to manually copy the link from the medium to the browsing device like a TV ad. If you put one in a document intended for digital consumption you've already screwed up.
Link shorteners are old enough that likely more URLs that were targeted by link shorteners have rotted away than have link shorteners themselves.
Go look at a decade+ old webpage. So many of the links to specific resources (as in, not just a link to a domain name with no path) simply don't work anymore.
I think it would be easy for these services to audit their link database and cull any that have had dead endpoints for more than 12 months.
That would come off far less user hostile than this move while still achieving the goal of trimming truly unnecessary bloat from their database. It also doesn't require you to keep track of how often a link is followed, which incurs its own small cost.
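A culling pass like that doesn't need much machinery. A rough sketch, assuming a simple (code, target URL, first-seen-dead date) table rather than anything resembling Google's real pipeline:

    # Sketch of a dead-target audit: flag short links whose destination has
    # been failing for over a year. Table layout and thresholds are made up.
    import datetime
    import urllib.error
    import urllib.request

    def target_is_dead(url):
        """True if the destination fails to respond or returns a 4xx/5xx."""
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status >= 400
        except (urllib.error.URLError, TimeoutError, ValueError):
            return True

    def links_to_cull(links, grace=datetime.timedelta(days=365)):
        """links: iterable of (code, target_url, first_seen_dead date or None)."""
        today = datetime.date.today()
        for code, url, first_dead in links:
            if not target_is_dead(url):
                continue                       # destination still works; keep it
            if first_dead and today - first_dead > grace:
                yield code                     # dead for >12 months: removal candidate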
That actually seems just as bad to me, since the URL often has enough data to figure out what was being pointed to even if the exact URL format of a site has changed or even if a site has gone offline. It might be like:
kmart dot com / product.aspx?SKU=12345678&search_term=Staplers or /products/swingline-red-stapler-1235467890
Those URLs would now be dead and kmart itself will soon be fully dead but someone can still understand what was being linked to.
Even if the URL is 404, it's still possibly useful information for someone looking at some old resource.
We knew that. But it is very useful in documents that would be printed, especially if the original url is complicated. That is why one would not use a random url shortener, but Google's. After all, Google would never destroy those URLs, and the company will likely outlive us.
I'm completely serious, and I have a PhD thesis with such links to back it up. Just in some footnotes, but still.
Yes, maybe this shows how naive we were/I was. But it definitely also shows how deep Google has fallen, that it had so much trust and completely betrayed it.
Yeah, when Google was founded, people acted like they were normal smart and benevolent and forward-thinking Internet techies (it was a type), and they got a lot of support and good hires because of that.
Then, even as that was eroding, they were still seen as reliable, IIRC.
The killedbygoogle reputation was more recent. And still I think isn't common knowledge among non-techies.
And even today, if you ask a techie which companies have certain reliability capabilities, Google would be at the top of some lists (e.g., keeping certain sites running under massive demand, and securing data against attackers).
> Oddly, maybe even with less effort than shutting it down would take.
Google has a number of internal processes that effectively make it impossible to run legacy code without an engineering team just to integrate breaking upstream API changes, of which there are many. Imagine Google as an OS, and every few years you need to upgrade from, say, Google 8 to Google 9, and there's zero API or ABI stability so you have to rewrite every app built on Google. Everyone is on an upgrade treadmill. And you can't decide not to get on that treadmill either because everything built at Google is expected to launch at scale on Google's shitty[0]-ass infrastructure.
[0] In the same sense that Intel's EDA tools were absolutely fantastic when they made them and are holding the company back now
The worst case is when this mentality of "just update your code" leaks out to the rest of us. I'm still scarred from some of the samesite shenanigans, breaking useful (not ads) boxed software because they figured everyone on the internet could "just update" their websites within six months of them putting out a dev blog post.
It's just not an accurate view of how the world works.
It may help prevent linkjacking. If an old URL no longer works but the goo.gl link is still available, it's possible that someone could take over the URL and use it for malicious purposes. Consider a scenario like this:
1. Years ago, Acme Corp sets up an FAQ page and creates a goo.gl link to the FAQ.
2. Acme goes out of business. They take the website down, but the goo.gl link is still accessible on some old third-party content, like social media posts.
3. Eventually, the domain registration lapses, and a bad actor takes over the domain.
4. Someone stumbles across a goo.gl link in a reddit thread from a decade ago and clicks it. Instead of going to Acme, they now go to a malicious site full of malware.
With the new policy, if enough time has passed without anyone clicking on the link, then Google will deactivate it, and the user in step 4 would now get a 404 from Google instead.
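A lighter-touch mitigation (not what Google announced, just a sketch) would be to check whether a link's destination host still resolves before serving the redirect; that only catches lapsed or unregistered domains, not ones already re-registered by an attacker:

    # Sketch: hold the redirect when the destination host no longer resolves,
    # since a lapsed domain is exactly what a squatter or phisher picks up.
    # This only catches NXDOMAIN targets, not domains already re-registered.
    import socket
    from urllib.parse import urlparse

    def safe_to_redirect(target_url):
        host = urlparse(target_url).hostname
        if not host:
            return False
        try:
            socket.getaddrinfo(host, None)     # raises if the domain doesn't resolve
            return True
        except socket.gaierror:
            return False                       # lapsed/unregistered: don't forward users

    print(safe_to_redirect("https://example.com/faq"))                  # True
    print(safe_to_redirect("https://acme-faq.invalid/old-page"))        # False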
Goo.gl was a terrible idea in the first place because it lends Google's apparent legitimacy (in the eyes of the average "noob") to unmoderated content that could be malicious. That's probably why they at least stopped allowing new ones to be made. By allowing old ones, they can't rule out the Google brand being used to scam and phish.
e.g. Imagine SMS or email saying "We've received your request to delete your Google account effective (insert 1 hour's time). To cancel your request, just click here and log into your account: https://goo.gl/ASDFjkl
This was a very popular strategy for phishing and it's still possible if you can find old links that go to hosts that are NXDOMAIN and unregistered, of which there are no doubt millions.
Only insofar as Google might wish to prevent it since their brand was on the shortened url you clicked to get there. And people not having malware is surely good for Google indirectly.
Presumably ACME used the link shortener because they wanted to put the shortened link somewhere, so someone’s going to click things like these. If Google can just delete a lot of it why not?
It creates a good entry in the promo package for that Google manager. "Successfully conducted cost saving measure, cutting down the spend on the link shortener service by 70%". Of course, hoping that no one will check the actual numbers.
They’re not shutting down a product, they’re removing old links.
I’m not defending it, just that I can absolutely imagine Google PMs making a chart of “$ saved vs clicks” and everyone slapping each other on the back and saying good job well done.
I find even this incredibly stingy... Back of the envelope:
10 * 4 * 3000000000 / (1024^3)
Ten 4-byte characters times 3 billion links, divided by the number of bytes in a GiB...
Roughly 111 GB of RAM.
Which is like nothing to a search giant.
To put that into perspective, my desktop computer's motherboard maxes out at 128 GB of memory, so saying it has to do with RAM is like saying they needed to shut off a couple of servers... and save maybe a thousand dollars.
This reeks of something else, if not just sheer ineptitude...
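Spelling out that back-of-the-envelope number, plus the replication multiplier raised in the reply below (all figures are the thread's rough guesses, not real goo.gl numbers):

    # The thread's back-of-the-envelope RAM estimate, spelled out.
    # All figures are rough guesses from the comments, not real goo.gl numbers.
    chars_per_code = 10
    bytes_per_char = 4
    links = 3_000_000_000

    one_copy = chars_per_code * bytes_per_char * links          # bytes, single copy
    print(f"one copy: {one_copy / 2**30:.1f} GiB")              # ~111.8 GiB

    # The reply's point: a global service runs many replicas across datacenters.
    replicas = 100 * 10                        # reading "100s of jobs on 10-20 DCs" loosely
    print(f"{replicas} replicas: {one_copy * replicas / 2**40:.1f} TiB")   # ~109 TiB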
> Roughly 111 GB of RAM. Which is like nothing to a search giant.
You are forgetting job replication. A global service can easily have 100s of jobs on 10-20 datacenters.
Saving 111 TiB of RAM can probably pay your salary forever; I think I paid for mine with smaller savings while I was there. During COVID there was also a RAM shortage, severe enough that there was a call to prefer trading CPU to save RAM, with changes to the rule-of-thumb resource costs.
> A global service can easily have 100s of jobs on 10-20 datacenters.
There's obviously something in between maintaining current latency across 20 datacenters, increasing latency a bit by shrinking hosting down to a couple hundred dollars' worth of servers, and setting the latency to infinity, which was the original plan.
I'm guessing that they ran out of leeway with small tweaks and found that breaking inactive links was probably a better way out. We don't know the hit rates of what they call inactive nor the real cost it takes to keep them around.
A service like this is probably in maintenance mode too, so simplifying it to use fewer resources probably makes sense, and I bet the PMs are happy about shorter links, since at some point you're better off not using a link shortener at all and just using a QR code, to avoid inconvenience and typos.
I don't understand. For you to see the message, you have to click on the link. Your clicking on the link must mean that the link is active, since it is getting clicks. So why is the link being deactivated for being inactive?
If I had to guess, it possibly has something to do with crawlers/bots/etc. triggering the detection, plus some kind of more advanced logic to try to ensure it's really being used -- light captcha style.
I am pretty sure the terrible idea of putting the Google brand on something that can so easily be used for phishing is the reason they deprecated it in the first place. They should have used something without obvious branding.
The key difference is share.google, as you mentioned, is for Google controlled properties whereas goo.gl allowed shortening any arbitrary user provided URL. Which opened up a giant can of worms with Google implicitly lending its brand credibility to any URL used by a scammer, phisher or attacker.
How? I just tried each of the Share options for this thread in the desktop Share menu, and they all used the full URL. Including the QR code which I verified by saving as a PNG and scanning it outside of any Google app. I also haven't found any Share option in the iOS app either that doesn't use the full URL. But harder to test on mobile given the various permutations of sharing between random apps.
I would've imagined that the good will (or more likely, the lack of bad will) from _not_ doing this would've been worth the cost, considering I can't imagine this has high costs to run.
The same reason you did it in the first place -- despite a ton of people who saw the future saying you shouldn't -- is the reason the next generation of people will do it despite you trying to warn them.
Any form of URL is at best a point in time reference.
Shortened or not, they change, disappear, get redirected, all the time. There was once an idea that a URL was (or should be) a permanent reference, but to the extent that was ever true it's long in the past.
The closest thing we might have to that is an Internet Archive link.
Otherwise, don't cite URLs. Cite authors, titles, keywords, and dates, and maybe a search engine will turn up the document, if it exists at all.
I think I might be doing a self plug here, so pardon me, but I am pretty sure that I can create something like a link shortener that can last essentially permanently. It has to do with crypto (which I don't adore as an investment, I must make that absolutely clear).
But basically I have created nanotimestamps, which can embed some data in the Nano blockchain, and that data could theoretically be a link.
Now the problem is that the link would at least be either a transaction ID, which is big, or some sort of seed passphrase...
So no, it's not as easy as some passphrase, but I am pretty sure that Nano isn't going to dissolve; last time I checked it has 60 nodes, anyone can host a node, and did I mention all of this is completely free (there are no gas fees in Nano, which is why I picked it).
I am not associated with the Nano team, and it would actually put their system under some strain if we used it this way, but their system allows for it... so why not cheat the system.
Tldr: I am pretty sure that I can build a decentralized link shortener that can survive a really long time, but the trade-off is that the shortened link might actually become longer than the original link. I can still think of a way to actually shorten it, though.
Like I just thought that Nano has a way to catalogue transactions in time, so it's theoretically possible to catalogue transactions by time, and then the link is basically just the nth transaction, where n could be something like 1000232,
and so test.org/1000232 could lead to something like a YouTube rickroll. It could theoretically work. If literally anybody is interested, I can create a basic prototype, since I am just so proud that I created some decent "innovation" in a space that I am not even familiar with (I ain't no crypto wizard).
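To make the "nth transaction" idea concrete, here's a toy model where a plain Python list stands in for the append-only ledger; nothing here touches the real Nano network, and test.org is just the hypothetical gateway from the comment above:

    # Toy model of the proposal: an append-only ledger as the link database,
    # with each entry's ordinal position used as the short code. A plain list
    # stands in for the blockchain; nothing here touches the real Nano network.
    ledger = []    # entry n is the nth "transaction" carrying a URL

    def shorten(url):
        ledger.append(url)                     # in the real idea: embed url in a transaction
        n = len(ledger) - 1                    # the ordinal doubles as the short code
        return f"https://test.org/{n}"         # test.org is the hypothetical gateway

    def resolve(short_url):
        n = int(short_url.rsplit("/", 1)[-1])
        return ledger[n]                       # any gateway replaying the ledger can do this

    link = shorten("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
    print(link)              # e.g. https://test.org/0
    print(resolve(link))     # the original URL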
You can't address the risk that whoever owns the domain will stop renewing it, or otherwise stop making the web gateway available. Best-case scenario is that it becomes possible to find out what URL a shortened link used to point to, for as long as the underlying blockchain lasts, but if a regular user clicks on a link after the web gateway shuts down then they'll get an error message or end up on a domain squatting site, neither of which will provide any information about how to get where they want to go.
These days one can register a domain for ten years, and have it auto-renew with prefunded payments that are already sitting in the account. This is what I did for the URL shortener I am developing.
The same would have to be done for the node running the service, and it too has been prefunded with a sitting balance.
Granted, there still exist failure modes, and so the bus factor needs to be more than one, but the above setup can in all probability easily ride out a few decades with the original person forgetting about it. In principle, a prefunded LLM with access to appropriate tooling and a headless browser can even be put in charge to address common administrative concerns.
I mean, yes, the web gateway can shut down, but honestly, at least with goo.gl, if things go down there is no way of recovering.
With the system I am presenting, I think it would be possible to have a website like redirect.com/<some-gibberish>, and even if redirect.com goes down then yes, that link would stop working, but what redirect.com is doing under the hood can be done by anybody. That being said,
it would be possible for someone to archive redirect.com's main site, which could give instructions pointing to an up-to-date list on GitHub or somewhere else of the top working web gateways.
And so anybody can go to archive.org, see what was meant, and try it. Or maybe we could have some sort of slug like redirect.com/block/<random-gibberish>, and people could come to understand "block" to mean that this is just a gateway (a better, more niche word would help).
But still, at the end of the day there is some way of using that shortened link forever, which makes it permanent in some sense.
Like, imagine that someone uses a goo.gl link for some extremely important document and then somehow it becomes inaccessible for whatever reason, and now... it's just gone?
I think that a way to recover that could really help. But honestly, I am all in for feedback, and since there are zero fees
I would most likely completely open source it. I'm not involved in this crypto project and will most likely never earn anything even if I do make this, but I just hope that I could help make the internet a little less like a graveyard of dead links.
1) I think this means every link is essentially public? Probably not ideal.
2) You don't actually want things to be permanent - users will inevitably shorten strings they didn't mean to or want to, so there needs to be a way to scrub them.
Yes, I did address that part, but honestly I can use the time at which it was sent into the blockchain / the transaction ID, which is generally really short, as I said in the comment. I will hack together a prototype tomorrow.
Data stored in a blockchain isn't any more permanent than data stored in a well-seeded SQLite torrent: it's got the same failure modes (including "yes, technically there are a thousand copies… somewhere; but we're unlikely to get hold of one any time in the next 3 years").
But yes, you have correctly used the primitives to construct a system. (It's hardly your fault people undersell the leakiness of the abstraction.)
Honestly, I agree with your point so wholeheartedly.
I was really into p2p technologies like iroh etc., and at a fundamental level you are still trusting that someone won't just suddenly walk away, so things can still very well go down... even in crypto.
But compared to a SQLite torrent, the thing about crypto might be that since people's real money is involved (for better or worse), the data stored in the blockchain effectively becomes permanent. And like I said, I can use those 60 nodes absolutely free, thanks to zero gas fees, compared to a SQLite torrent.
Oh boy... I think I found the man I can yap to about the idea I got while scrolling through HN: a link shortener on a blockchain with 0 gas fees.
Here is the comment, since I don't want to spam the same comment twice. Have a nice day.
If you make it read-only, maybe. If anyone can generate a link, wait for your hosting provider to shout at you and ask why there is so much spam/illegal content on your domain. Then you realize you can't actually manage a service like this.
As we already have a PostgreSQL database server, the cost of running this is extremely low, and we aren't concerned about GDPR (etc.) issues with using a third-party site.
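For anyone curious what "extremely low cost" looks like, the whole thing can be one table and one indexed lookup per redirect. A sketch using the stdlib sqlite3 module so it runs standalone; the schema and queries carry over to PostgreSQL essentially unchanged (only the placeholder style differs):

    # Minimal shortener storage: one table, one indexed lookup per redirect.
    # sqlite3 keeps the sketch self-contained; on PostgreSQL the same schema
    # and queries work with %s placeholders instead of ?.
    import secrets
    import sqlite3

    db = sqlite3.connect("shortener.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS links (
            code    TEXT PRIMARY KEY,
            target  TEXT NOT NULL,
            created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    """)

    def shorten(target):
        code = secrets.token_urlsafe(4)        # ~6 URL-safe random characters
        db.execute("INSERT INTO links (code, target) VALUES (?, ?)", (code, target))
        db.commit()
        return code

    def resolve(code):
        row = db.execute("SELECT target FROM links WHERE code = ?", (code,)).fetchone()
        return row[0] if row else None

    code = shorten("https://news.ycombinator.com/")
    print(code, "->", resolve(code))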
I'm not sure how to ask this without being rude, so I'll just shoot. Is this example satire? https://0x.co/examples.html
What normal person would find this glove and actually get it returned to its owner? Even if "0x.co" were written on it too, I think most people wouldn't understand it to be a URL.
> We understand these links are embedded in countless documents, videos, posts and more, and we appreciate the input received.
How did they think the links were being used?
> I guess that might be things like some new version of BigTable or whatever coming along, so you need to migrate everything from the previous versions.
Arrival of new does not necessitate migration.
Only departure of old does.
> Also, how many man years of engineering do you want to spend on keeping goo.gl running. If you were an engineer would you want to be assigned this project?
This seems like a good eval case for autonomous coding agents.
> I assume google semi-competently made the shortener (using a random number generator) which would make it pretty hard to find links in the first place.
Back when it was made, shorteners were competing to see who could make the shortest URL, so I bet a brute force scan would find everything.
If they have a (passwordless) URL, they're not secret.
Cloudflare offered to run it and Google turned them down:
https://x.com/elithrar/status/1948451254780526609
Edit: nevermind, I had no idea Dynamic Links is deprecated and will be shutting down.
It's a really ridiculous decision though. There's not a lot that goes into a link redirection service.
Put an ad on the redirect page: that way they'd make money, they could fund the service instead of shutting it down, and there wouldn't be any linkrot.
"Here's a permanent (*) link".
[*] Definitions of permanent may vary wildly.
For a company running GCP and giving away things like Colab TPUs for free, the cost of running a URL service would be a trivial rounding error at best.
Like other things that get spun down, there must not be enough value in the links.
https://www.auslogics.com/en/articles/is-it-bad-that-google-...
Not only are things evolving internally within Google, laws are evolving externally and must be followed.
That someone made a poor decision to rely on anything made by Google.
Look at what happened to their search results over the years and you'll understand.
Apparently they measured it once by running a map-reduce or equivalent.
I don’t see why they couldn’t measure it again. Maybe they don’t want it to be gamed, but why?
But just a guess.
https://tracker.archiveteam.org/goo-gl/
Google's shortened goo.gl links will stop working next month - https://news.ycombinator.com/item?id=44683481 - July 2025 (219 comments)
Google URL Shortener links will no longer be available - https://news.ycombinator.com/item?id=40998549 - July 2024 (49 comments)
Is that the same shortening platform running it?
And also does this have something to do with the .gl TLD? Greenland? A redirect to share.google would be fine
I for one would think twice to rely too much on any of their services.
- Tinyurl.com, launched in 2002, currently 23 years old
- Urly.it, launched in 2009, currently 16 years old
- Bitly.com, also launched in 2009
So yes, some services survived a long time.
If you want to use blockchain for this, I advise properly using a dedicated new blockchain, not spamming the Nano network.
https://news.ycombinator.com/reply?id=44760545
https://pinboard.in/
Fwiw, I wrote and hosted my own URL shortener, also embeddable in applications.
I don't know if anyone should use a URL shortener or not ... but if you do ...
"Oh By"[1] will be around in thirty years.
Links will not be "purged". Users won't be tracked. Ads won't be served.
[1] https://0x.co
How can you (or I) know that?