Malvertising on Google Ads (kolide.com)
185 points by notthatelaine 3 hours ago | 118 comments





Not only malvertising in ads, but also in the main search results, through SEO techniques... We've been fighting this for more than 10 years on VLC...

And they refuse to act on numerous reports of the same issue, over and over, for 10 years... And the Safebrowsing initiative is a joke, since they always say "it is fine".

Badware people are often one step ahead...


> refuse to act on numerous reports of the same issue

FWIW, they're acting all the time. It's whack-a-mole with the malware providers.


They've allowed a prominent malware ad to appear on searches for Blender for months despite numerous reports. They're not taking action on these bad actors.

https://old.reddit.com/r/blender/comments/105tht4/be_aware_o...


I just searched for Blender and I cannot reproduce this.

... but what I suspect happened is they got reports, took a few days to down the ad, the ad goes up under another URL, they get reports, take a few days to down the ad, etc. The malware vendors are tenacious and have a pretty much bottomless well of Turking for CAPTCHAs and backup accounts.

ETA: none of this is to imply that Google shouldn't fix the problem or that they don't need to divert more resources to it (if for no other reason than it does actually threaten their bottom line if they can't get on top of it and people conclude it's not worth it to keep recommending Google search to naive users). But the problem's generally harder to fix than most people believe.


Okay, so malware actors create new accounts and try new ways. That's no surprise. That doesn't adequately explain or forgive the behavior by Google here. They're one of the largest, most profitable enterprises in history; it's no longer an excuse.

I agree.

There is one thing Google could do that would eliminate the vast majority of this sort of thing. They could require a manual review of all ads and advertisers before putting each ad into the pool. Like traditional media does.

But that doesn't scale, so it's not going to happen. But avoiding something because it doesn't scale is a deliberate choice, and I think it's fair to consider Google to be at fault for allowing this state of affairs to continue as it is.


That doesn't work because it's not the ads themselves that serve the malware, but the page the ads point to. Changing that after the review is done is trivial, and asking landing pages to never change is simply unreasonable for a vast number of reasons.

That's why I included "vet the advertisers". It's not just the ads that need to be examined, but the people putting the ads up.

What major city would you recommend Google employ at 100% to vet enough advertisers to support nearly 30 billion daily ad impressions?

Or, alternatively, should there be a few tens of thousands of firms allowed to advertise on the Internet and the rest of us can just pound sand?

(... actually, now that I think that "out loud," a distributed trust model would be an interesting idea. Google, instead of vetting ads, could vet trusted ad resellers, and knock entire resellers off the network that failed to do due diligence. The resellers would be responsible for policing their various houses and if you didn't like the terms one provided you could go to another. This is, perhaps, one of those situations where more middlemen would be desirable).


Why should we care that it's not profitable for Google to do so? I would argue they are facilitating illegal activities, so why shouldn't they be financially (and maybe criminally) liable? If that destroys their business model, why should we care?

Adtech veteran here. That's not how the industry works.

All ads on major DSPs already require an approval step before they can run. Advertiser accounts too, especially at scale. While there are plenty of technical openings for fraud and malware, the vast majority is from known actors that can be resolved through business practices.

A trillion-dollar megacorporation with hundreds of thousands of employees has more than enough resources to handle this. The reason it doesn't is because of the flow of money and incentives across the vast supply chain from advertisers and agencies to vendors and publishers.


Adtech veteran here (from the other side). The trillion-dollar corporation has vast teams and assets invested in this project. No temporary monetary incentive is worth the risk of being seen as a likelier vendor of malware than of quality search results.

But the opposing operators get more and more sophisticated, countermeasures that worked half a decade ago get circumvented, and the arms race continues.


I already mentioned that it doesn't scale.

The real issue, IMO, is that Google's business model is just fundamentally bad. But Google is large enough that it doesn't matter. They're like a large industrial polluter poisoning the lands and arguing that there's nothing they can effectively do about it because addressing the problem would be bad for their business.


Well, their business and the business of everyone that advertises online. So it comes back to "Should we all pound sand because of a (statistically) few bad actors?"

Firefox advertises at the top of "download browser." Should we cede their ability to be found to whoever Google thinks should be at the top of that organic result? Because by user numbers alone, it probably won't be Firefox!


I think a strong case can be made that if a business cannot operate without causing harm to unconsenting others, it should not be operating.

> because of a (statistically) few bad actors?

It doesn't actually matter how many or few bad actors there are. What matters is how much harm is being done.

I'm not sure what your point is about Firefox, but in general, it doesn't matter if mitigating the harm Google's ad system does adversely affects Firefox or any other advertiser.


Who are the unconsenting others? The people who chose to trust a Google ad?

I'm not sure what consent means if it doesn't mean "user clicked on a result after asking Google for results." The backstop here is the user doesn't come back because they got screwed by Google, not that some third-party makes that decision for people.

But yes, I suspect if Google can't get on top of this problem they'll lose their leadership position in search.


> I'm not sure what consent means if it doesn't mean "user clicked on a result after asking Google for results."

First, I'm talking about ads, not search results. Although Google conflates the two as much as they can get away with, and people often get confused as to which is which.

I can't imagine how clicking on an ad can be interpreted as consent to being exposed to malware. In order to be considered "consent", the person has to be fully and accurately informed of what they're being asked to consent to.

> The backstop here is the user doesn't come back because they got screwed by Google

I truly wish we lived in a world where that could be expected.


I don't see why we wouldn't expect it. Wouldn't be worth talking about if it wasn't a possibility.

Google dethroned search vendors before them and they aren't bulletproof. If they are, might as well pack up DDG and everything else right now. Shut off the servers and decrease the greenhouse gas emissions, right?

No, I think the backstop here is that DDG has a wonderful opportunity to be the search engine where you don't have to worry about getting vended malware via an ad above the first organic result.


They could make a bot that trawls the ad URLs every so often, and if it detects malware activity it could put a strike on the account associated with that ad. A few strikes and they are banned and their ad account closed. It wouldn't be perfect, but it would help take out the worst offenders.
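Roughly the shape of what I mean, as a sketch. The scanner, the (account_id, landing_url) records, and the strike limit here are all made-up placeholders, not anything Google actually runs:

    # Sketch of the strike-and-ban crawler idea above. scan_url_for_malware()
    # and the ad/account records are hypothetical stand-ins; a real system
    # would plug into the ad network's own datastore and scanners.
    from collections import defaultdict

    STRIKE_LIMIT = 3                  # invented threshold
    strikes = defaultdict(int)        # advertiser account id -> strike count
    banned = set()

    def scan_url_for_malware(url: str) -> bool:
        """Placeholder for a real scanner, e.g. detonating the landing page in a sandbox."""
        raise NotImplementedError

    def sweep(active_ads):
        """active_ads: iterable of (account_id, landing_url) pairs."""
        for account_id, landing_url in active_ads:
            if account_id in banned:
                continue
            if scan_url_for_malware(landing_url):
                strikes[account_id] += 1
                if strikes[account_id] >= STRIKE_LIMIT:
                    banned.add(account_id)  # close the ad account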

They have that and they currently use it. It does take out the worst offenders.

But Google had to take down on the order of millions of accounts in 2021. The crawler's hit rate is probably not enough to keep up with this problem.


I mean they _do_ have VirusTotal to compare hashes to. It's obviously not fool-proof but it's an option.

They could also offer complimentary priority ad space for non-profit open source projects (even if only projects heavily targeted by malvertising), so their information appears before the spam, maybe with a prominent special tag. I don't know if that would noticeably harm their advertising revenue.

That's how one might like to think traditional media works, but then again the Lakers play in the Crypto.com (previously Staples) Arena, and plenty of other orgs have taken shady crypto money as well. It's not exactly like human review is a perfect solution.

Scams in reputable, real-life advertising are orders of magnitude less frequent than in online advertising.

It's certainly not a perfect solution. But it would be vastly superior to whatever they're doing now.

The problem is Google has no incentive to do it. Section 230 gives them blanket immunity. They make just as much money shipping malware as legitimate ads.

Charge Google a fine every time they serve a malicious ad and they will fix it.


I don't think this is a thing that section 230 covers.

It is. It's one of the reasons it's such an inept law: It refers to the concept of moderation in the context of "good faith" efforts, but fails to account for the influence of money in decisionmaking. This impacts all user-generated content, whether it be a social media post or an ad.

Except that advertising is already covered by existing regulations that 230 doesn't supersede. But IANAL, and I'll concede that I may be thinking of how it should be rather than how it is.

Nonetheless, all of my comments are engaging in wishful thinking. Google is a monster and I'm not sure anyone can tame it anytime soon.


The behavior by Google here is "Doing everything they can figure out how to do to get the malvertisers off their network without breaking the network itself." It appears that, in the short run, the malvertisers are winning the arms race.

... but if you have any ideas they haven't tried, I suspect they'd love to hear about it in a job interview for any of the openings for ad quality SWE.


This is not some impossibly intractable algorithmic problem to solve. Simply actioning malware reports on the ad would be sufficient. If an ad receives hundreds of reports over the course of months, there's probably something wrong with it, and it should trigger a human review, at which point the malicious intent of the ad is obvious. Why include a report button on ads at all if it's effectively a placebo button?

What is your evidence that the malware reports aren't being actioned?

The malware continuing to appear isn't sufficient evidence. Malware moves hosts and ad accounts all the time.

ETA: from the article itself, in 2021 Google "Removed over 3.4 billion ads, restricted over 5.7 billion ads and suspended over 5.6 million advertiser accounts." That's a ton of action, but AdWords alone also serves 29 billion ad impressions a day. It doesn't take more than a few bad actors slipping through the cracks to get seen (and at these orders of magnitude, "a few" is still "millions." Completely impractical for human hand-review).


And yet despite these numbers, somehow the most prominent open source software is relentlessly impersonated and Google is the facilitator, FTA: https://twitter.com/wdormann/status/1616497407390355456

It's clear this will never be prioritized without regulation, as scammers' money is as good as anyone else's and open source projects cannot afford to sue Google to force action.


Someone else already said this: it's not that hard, they could simply do a manual review of ads. But that's obviously going to eat into their profits, so they will not do it.

I think this really requires governments to step in. I mean one could easily argue that Google is facilitating fraud here, so maybe they should be liable?


The problem with gov't regulation is that the pages have a good chance of not being within the jurisdiction of whatever gov't is trying to do the regulating. So unless someone like Uncle Sam is going to say that ISPs must not peer with known bad places, there's no way to block access to these out-of-jurisdiction pages.

The notion that it is not hard to review millions of advertisers does not align with reality.

Why would I care whether this is easy for Google? I'm saying that we need to provide a government-led "incentive". If Google becomes financially (criminally?) liable for the damage they cause with fraudulent ads, they would quickly implement a way to solve the problem. If you mean it's difficult for the government to regulate, why? They don't need to find the fraudulent ads; someone who has been affected just needs to provide evidence and get a ruling against Google for "hosting" it.

You could make the same argument about disposing of toxic waste (it's definitely cheaper and easier to just dump it in the river than to deal with the "reality" of processing millions of litres of sludge).

I actually think the problems of dealing with toxic waste are far more tractable than the problems of vetting every ad in a network serving 30 billion impressions a day.

Toxic waste doesn't try and hide from the litmus paper or the geiger counter.


Yet there is still a cost to dealing with toxic waste, which encourages companies to not make any more of it than necessary. There's currently no cost (it's all profit, in fact) to promoting malicious ads in search results, so why wouldn't Google do it?

There is no reason they have to serve 30 billion impressions a day. If vetting takes that down to 1 billion, that's fine. Lower the volumes (and raise the prices to fund manual vetting) until the problem is resolved.

Toxic waste disposal is a solved problem thanks to (enforced!) regulations that force companies to do so under threat of heavy penalties, not altruism or the fact that the waste doesn't hide from a Geiger counter. We need the same for online advertising.


Perhaps the network needs to be broken, then.

It is sort of "whack-a-mole".

I see some shady ads right now via adsense on https://getpaint.net

Screenshot: https://imgur.com/a/WRvrddy

Someone will report them, and they will go away, then reappear from a different AdWords account. They don't seem to have any smarter heuristic to reject ads that only say, for example, "Download Now".
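Even a crude version of that heuristic is easy to sketch; the phrase list and length cutoff below are invented for illustration, and a real filter would obviously need to be far more robust:

    # Naive ad-copy heuristic: flag ads whose visible text is little more than
    # generic download bait. Phrase list and cutoff are made up for illustration.
    GENERIC_BAIT = {"download now", "free download", "official site", "install now"}

    def looks_like_download_bait(ad_text: str) -> bool:
        text = ad_text.strip().lower()
        # Very short copy that is essentially just a bait phrase is suspicious.
        return len(text) < 40 and any(phrase in text for phrase in GENERIC_BAIT)

    assert looks_like_download_bait("Download Now")
    assert not looks_like_download_bait("Dan's Local Electronics - TVs and laptops, downtown Springfield")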


Why are they accepting ads for a product from anyone but the verified maker of the product in the first place? Surely there's budget in that river of ad money to do the most basic due diligence?

Being allowed to advertise on someone else's brand, trademark, etc. has been pretty much a cornerstone of online advertising since the birth of online advertising. It's justified as the way mom-and-pops have any hope at all of competing with big-box names; otherwise, Dan's Local Electronics couldn't show up on searches for Best Buy as a potential micro-targeted local alternative.

I'm not talking about advertising on someone else's brand, I'm talking about advertising AS someone else's brand, or malicious impersonation. The most basic vetting of advertising would catch this, but apparently this is not occurring.

Our observation of what is occurring doesn't match what is occurring.

Here's how the vetting you're imagining works:

1. The automated system goes to the advertised site. But Google's IPs are public knowledge, so the site vends a "safe" version to Google's checkers.

2. If Google sends a human being? Same story. That human's coming from a Google IP.

3. Google has a small set of non-Google IPs that they privately use for checking. This process seems to have broken down. My guess is malvertisers have gotten wise and have managed to build a good list of those IPs, so they can cloak against Google's back- and side-channel verification too.

In terms of the actual ad copy: I suspect a lot of that is checked automatically, and the rest is often checked by contractors. So you're trying to solve the "Build an AI to understand when something is confusing" problem. There's probably room for improvement here, but it's not as surprising as I wish it were that stuff slips through the cracks at that layer.


> But Google's IPs are public knowledge

You're telling me that even though attackers of various sophistication are able to get clean, residential IPs all the time for nefarious purposes, Google can't do the same? Come on. It's not that they can't, it's that they don't care.
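One way to act on that, sketched under the assumption that you have a pool of clean residential exits (the proxy endpoint below is a placeholder): fetch the advertised landing page from a known datacenter egress and again through a residential proxy, and flag the ad when the two responses diverge.

    # Counter-cloaking check: compare the landing page as seen from a datacenter
    # IP vs. through a residential proxy. A bare hash comparison is crude; a real
    # checker would compare redirect chains, DOM structure, and downloaded payloads.
    import hashlib
    import requests

    # Hypothetical residential exit; any pool of "clean" proxies would do.
    RESIDENTIAL_PROXY = {
        "http": "http://user:pass@residential-proxy.example:8080",
        "https": "http://user:pass@residential-proxy.example:8080",
    }

    def fingerprint(html: str) -> str:
        return hashlib.sha256(html.encode("utf-8", "replace")).hexdigest()

    def looks_cloaked(landing_url: str) -> bool:
        from_datacenter = requests.get(landing_url, timeout=10).text
        from_residential = requests.get(landing_url, timeout=10, proxies=RESIDENTIAL_PROXY).text
        return fingerprint(from_datacenter) != fingerprint(from_residential)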


Is that extortion? Pay us or malware displaces you as the first thing people see?

That said, it’s pretty common to get a competitor ad above the top search result.


It's not "extortion" so much as "stupid." "Pay us or we'll convince people our search results are crap" is a really bad business model.

You know how many people will think “teamviewer or Firefox must have gotten hacked” before “google sent me to a bad site” OR not notice for days and have no idea where it came from?

Because in the vast majority of markets the manufacturer is not the reseller. This is even true in many places in the software market.

True, but it seems like in the case of Blender, there is no reseller, so an easy solution is to literally blacklist the word "Blender" (when it comes to software - I'm sure they have semantic analysis behind the scenes to differentiate Blender the 3D software vs a smoothie blender). The ban can be reversed if an official from the Blender project reaches out (if they ever need to advertise for example).

I came across this a few months ago (several ads offering their own downloads for blender from copycat sites). All their downloads were hosted on GitHub and had known viruses when uploaded to VirusTotal. I reported 3 of them to GitHub, but they only removed 2 of them immediately. Checking now, and the 3rd was finally removed, but it was left up for a while. Seems like searching for blender doesn't show me any ads until I scroll down for a while, so maybe they're temporarily fixing the issue by just not showing ads for blender? shrug

> FWIW, they're acting all the time. It's whack-a-mole with the malware providers.

Untrue, I can give you quite a few who have been there forever, by private message, if you want.


you should post publicly about this on a blog or social media if you can

That would just make Google take them down without solving the underlying root cause.

The longer he keeps them private and confirms they still exist, the more damning the evidence against Google's lies becomes.


We'll just advertise it on Google.


TBH I never see them. I don't know what charmed allow-list I'm on in Google's infra, but 98% of my attempts to repro these reports on Reddit, Mastodon, here, et al. (Incognito mode or no) fail.

This suggests to me that what people are generally seeing is churn, not lack of action (i.e. individual bad actors get taken out but they're up again soon).


I work on ads, but not for Google and FWIW, I've only been able to reproduce a few of these malvertising reports. However, I wouldn't be surprised if there were additional targeting parameters on these campaigns. Rather than targeting just anybody searching for VLC, Blender, or Audacity, these malvertisers want to target folks more likely to click a "download now" malvertisement. Maybe only target older users, non-developers, Windows users only, or a number of other facets that probably have a higher rate of installing malware. I have no knowledge if these folks are doing this, but that's what I'd do if I were a scummy advertiser shilling malware. If they can avoid wasting their ad budget on sophisticated users, I'm sure they will.

You searched websites, not windows executable downloads.

There’s malware above things like VLC, Zoom, Firefox, Malwarebytes, Teamviewer, all the time. For the better part of a decade, if not longer.


I literally hit each of those keywords just now and saw nothing of the sort.

So it's probably whack-a-mole problems.


Ad fraud is a political problem, not a technical or resource problem.

It can definitely be all three.

In the political dimension, the issue could be addressed by taking advertising away from being a service generally available to everyone and restricting the right to advertise online to a vetted elite few.


To clarify, I mean political in the sense of business politics and procedures, not legal. More importantly, there is no "right" to advertise nor is Google the only advertising system.

The power of advertising (as in buying influence) should absolutely require vetting and approvals. This is already done today, and comes in many layers as scale and budgets increase. The problem is profits that do not incentivize stopping these ads effectively.


It'd be interesting if there were some ad network that could use "social scoring" in a way that is analogous to Uber/Airbnb between riders-drivers, guests-hosts, etc.. Publishers could rate their advertisers for ads showing up on their site and advertisers could rate publishers they are being matched with.

In some way these scores could affect the search result ads that are shown.

Not saying Google necessarily would/should try this but some other smaller ad/search network.

I think it probably would work about the same as Uber/Airbnb, etc. - which is to say it would sort of work, at least getting the most egregious offenders off the network, with some anecdotal false positives.


I think the issue with that, is that some of the most common and profitable advertising is programmatic, like retargeting and lookalike campaigns. E.g. you search for a mattress and for a few days to weeks after, you see ads from two dozen different mattress companies everywhere on the web and social media. As a site owner, 1000 people look at your site and could see 1000 different advertisers targeting them as individuals. It's not realistic for you to rate 1000 different advertisers per day, nor will the ratings be helpful if tomorrow's visitors are being targeted by a whole different set of advertisers than today's. Any boutique ad network you create that doesn't allow programmatic advertising is going to have far fewer advertisers and far less money being spent, so publishers largely won't be interested in switching.

> some of the most common and profitable advertising is programmatic

Correct. Programmatic ads in general should not exist. There's no way to do them safely, or to do them without spying on everyone.


I think there is some huge missing gap for an offering that's something between:

- Youtube sponsorship where an advertiser/brand actually reaches out directly to each publisher/influencer

- Google ads, where there is zero relationship between the two parties, and most of the time the ads that show up on your blog targeting a specific niche have no relation to your content


> Youtube sponsorship where an advertiser/brand actually reaches out directly to each publisher/influencer

Considering how many dodgy, unsafe, counterfeit or outright scam products I've seen advertised as sponsorships, I'm not sure this helps.


Example of prior HN post on topic:

- https://news.ycombinator.com/item?id=33727981


Since the dawn of search really.

I helped my mom install Zoom on a MacBook the other day. I typed "zoom download" or something into Google and mindlessly clicked the first link, before seeing it was some garbage domain that was certainly not interested in simply helping me install Zoom.

I had to scroll down to like the 5th result (read: 1st real result, after 4 ads disguised as results) before I found the legitimate Zoom domain.


This is one of the reasons I put uBlock Origin on my relatives' computers and phones. As soon as I see their browsers, I see the current state of the internet and insist they give me 20 seconds to fix the problem.

Same, and literally the default config blocks enough without really breaking anything. I personally activate nearly all blocklists and enable some other settings, but I know how to deal with it when stuff breaks.

I've said it before, and I'll say it again: websites should be held responsible for the data they serve, either directly or by embedding some ad script that loads 253 other scripts.

Let's say I visit a Costco warehouse, and there's a 3rd party vendor there. He offers me a box of pans. I take those pans, the box breaks open and a 20 lbs pan falls on my foot breaking it.

Who is responsible? Costco? Or the vendor? Who do I have an implicit contract with when entering a Costco warehouse?

Same with Google. If the ad downloads malware, we should hold Google responsible.


> I've said it before, and I'll [say it] again: websites should be held responsible for the data they serve

If that were the case, HN, reddit, YouTube, Facebook, Wikipedia, etc. would all have to shut down. There are a bunch of illegal things posted on all websites with user-generated content -- copyright violations, hate speech, financial advice, advising people to kill themselves -- all of which are illegal. You're suggesting we make the website owner liable?

> I've said it before, and I'll [say it] again

Removing section 230 protection as you're suggesting would be such a radical change in the internet as we know it. This argument is so stale. Please stop saying it again and again.


The difference is that Google gets paid money to serve the ads - HN, etc don't.

So you could start by repealing Section 230 only in cases where there's a direct monetary cost to publish - this would spare all the free user-generated-content websites while clamping down on malicious companies profiting off serving illegal/harmful content.


What do you mean? Everyone gets paid to serve ads, right? Otherwise, why are you serving ads?

Not that I disagree with the accountability.


Section 230 requires them to act in good faith to moderate the content. It isn’t unreasonable to expect better from Google. The problem is they get paid by the malware distributors to help distribute malware but can’t afford to police it adequately?

> websites should be held responsible for the data they serve

Isn't this exactly the root of the section 230 debate?

> the box breaks open and a 20 lbs pan falls on my foot breaking it.

Insurance would cover it. If it keeps happening then costco's insurance premiums will be higher or they may be dropped as a customer.

I wonder if they'll try replacing 230 with something along these lines. Imagine having to get insurance in order to host a publicly facing website. Imagine not having insurance because you're just hosting a simple blog. Imagine someone accusing your site of giving them malware. What needs to be proven? By whom? Does someone have to pay for a forensic analysis of all systems involved? Is the alternative just settling out of court? Would this be abused?

This seems like a much more convoluted hell of a system. I recommend, if you don't trust google, don't use google.


Bad example but I get what you're saying and agree.

On the other hand: profit


> You won’t think too hard about clicking a Google ad because you have no reason to be suspicious of them–they’re just part of the background noise of your digital life.

USE

AN

ADBLOCKER

ALREADY

The FBI even recommends using an adblocker now:

https://www.tomsguide.com/news/the-fbi-now-recommends-using-...

Stop thinking about adblockers as theft and start thinking about how exploitative the other side of the equation is. There's a whole lot of euphemisms for people who let themselves get exploited, and if you've convinced yourself you're a better, more moral being because you don't use adblockers, then those euphemisms should really be applied to you. You are the sucker that is getting taken advantage of.

(And why the hell would I think I need to even state that on a site devoted to "Hackers" -- when did that term slide so far from phone phreaking down to bootlicking a $600 billion ad market?)


There are lots and lots of scam ads on YouTube too. There are ads pretending to be Mr Beast offering to give you $1,000 for just clicking on the video (a lie, obviously - you're just directed to infinite scummy affiliate survey links, many of which are just as deceptive).

Or there are ads for GTA 6 which link you to god-knows-what.

I used to report these ads almost daily but the truth is Google/Youtube/Alphabet just doesn't care as long as it gets the money. Only regulation can stop this sort of crap.


We had a close call with malvertising ourselves, so we wrote an osquery query to alert on .dmg/.iso/.pkg downloads from unknown sources:

https://github.com/chainguard-dev/osquery-defense-kit/blob/m...

This query should not be your only line of defense, but can provide an early heads up before the package is opened. You can deploy this query with Kolide, as it uses osquery under the hood.

It was once possible to have a query like this that worked on Linux using the user.xdg.origin.url extended file attribute, but Chromium dropped support for it in 2019 for privacy reasons: https://chromium.googlesource.com/chromium/src/+/a9b4fb70b43...
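For anyone curious what that looks like in spirit, here is a minimal sketch of the idea against osquery's built-in file table, invoked through osqueryi; this is not the linked osquery-defense-kit query, and the Downloads path glob and extension list are assumptions:

    # Minimal sketch: list disk-image/installer downloads via osquery's file
    # table. Not the linked query; a real rule would also filter by source.
    import json
    import subprocess

    QUERY = """
    SELECT path, filename, size, datetime(btime, 'unixepoch') AS born
    FROM file
    WHERE path LIKE '/Users/%/Downloads/%%'
      AND (filename LIKE '%.dmg' OR filename LIKE '%.iso' OR filename LIKE '%.pkg');
    """

    def recent_installer_downloads():
        out = subprocess.run(["osqueryi", "--json", QUERY],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    if __name__ == "__main__":
        for row in recent_installer_downloads():
            print(f"review: {row['path']} ({row['size']} bytes)")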


Paid search is one solution. (Kagi is $10/mo.)

Another is switching to a smaller search engine that isn't yet targeted by the same schemes.

When I browse sites that are deeply infected by Google ads, every single ad seems scammy. The internet is a hostile place. I think it has been like this since the early 2000s.


I can't imagine browsing the web without an adblocker. And if I get a nag screen for running an adblocker, 9/10 times I will either circumvent it or walk away from the site if not mission critical to read it.

I'm sorry to websites but from my perspective ads are a failed monetization approach. Go back to the drawing board and come up with something new. Charge me $0.001 for each page view but don't fucking show me ads.


> ads are a failed monetization approach

If it's worth paying for a giant un-targeted poster next to a highway, it's worth paying for embedded (i.e. unblockable, equivalent to other site content) ads based solely on website content, not viewer tracking.

It's just that most sites/ad vendors don't want to, and are trying to gaslight us into thinking surveillance advertising is the only option.


I'd argue that >50% of bytes on the web are malicious. We're past the point where it still makes sense to attempt to load the page and just block the bad parts. Ad blockers are bringing a knife to a gun fight.

I'd like to find a way to crowd source an unauthorized CDN for just the good parts. Maybe the ads need to be rendered once by a server somewhere so we can extract the content from the page, but after that we ought to be able to gossip content that's been pre-stripped of ads.

The web of trust that would be needed to make the gossiping safe can also help us figure out who to pay.


I wouldn't call it "malicious," but way more than 50% of what you download is stuff you don't want. For example, a random NY Times op-ed has 6730 characters of text, but just the initial request is 454kb, so 98.5% of what you downloaded is overhead. Some of that is formatting markup, but probably 95% of it is junk. And that's just the initial request, which in turn pulls in the images, tracking scripts, and ads. In the end, probably 99% of what you downloaded to read those 6730 characters is stuff you didn't want.

Is Kagi noticeably better than other free search offerings with uBlock? I like the idea of paid search, but I have a hard time justifying US$120/yr unless the product is clearly superior to what I have now.

Not the GP, but in my experience Kagi replaced Google well. I used Duck for quite some time, and I found Kagi to be noticeably better.

Google could easily do more; at least they could rigorously check ads for the most popular open source programs, the ones that are downloaded the most, and they could make sure that official sites for popular programs rank highly. If that costs them some revenue, it's going to be a small hit; the scammers aren't giving Google that much money.

Advertising is about convincing people to do something that may not be in their best interests.

And this fits right in.


> you’d proceed to click on the first official-looking link you saw, even if it’s an ad

What? No. Why would anyone click on an ad?


The fact that this disgusting industry still exists suggests enough people are clicking them.

Because Google has A/B tested their ad badges to be subtle enough that most people miss them.

I never click on ads even if I know it's what I'm looking for, often it's the wrong page on the right website. Looks like I'm alone in this though.

It's wonderful that you're not affected by this, but sometimes people are in a hurry, or ill, or tipsy, or have poor eyesight, or have diminished cognition. Should we just toss them under the bus when they don't exercise perfect opsec?

You're not alone. I would never dare to click on an ad for anything, ever.

interesting that malware is now stealing 2FA desktop app credentials

those 2FA desktop apps should not exist in the first place

yeah it's annoying having to get your phone out, but having to get another device is sort of the point


> those 2FA desktop apps should not exist in the first place

They can exist but they should be called what they are: 1FA ; )


What do you do when your phone fails?

2fa can also be soft-defeated by simply using iMessage or messages.google.com, so sms codes go to the desktop machine you're trying to log in from. Does that mean we should eliminate services that connect to phone messaging?


> What do you do when your phone fails?

backup codes, a second enrolled device (maybe an old phone), a copy of the key stored offline

many different ways

> 2fa can also be soft-defeated by simply using iMessage or messages.google.com, so sms codes go to the desktop machine you're trying to log in from.

yes, a certain crappy type of "2fa" can be defeated if you choose to upload all your SMSes to a website in realtime

good luck getting my TOTP or U2F keys that way


I'm asking out of practical curiosity, what do you actually use for that? Google authenticator only works on one device, which is a single point of failure. Authy allows multiple devices, which enables both backups and the defeat you described. What are some other ways?

Crappy sms 2FA is, in my experience, completely unavoidable, because many critical services have that as the only option.


> I'm asking out of practical curiosity, what do you actually use for that?

I have used all three I gave you in my previous comment

(all my critical services now all use U2F though, which is vastly superior)

> Google authenticator only works on one device

you can scan the qr code on more than one device

you can also print the qr code out (or write the key down)

you can also export the entire list to another device inside google authenticator

no need for online storage of anything
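for illustration, the whole TOTP trick fits in a few lines of stdlib Python; the point is that the 6-digit code is computed locally from the shared secret plus the clock, so a printed copy of that secret (the one below is made up) is a complete backup

    # Minimal RFC 6238 TOTP sketch, standard library only. Nothing here talks
    # to the network: the code is derived from the secret and the current time.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(base32_secret: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(base32_secret.upper().replace(" ", ""))
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real one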


> (all my critical services now all use U2F though, which is vastly superior)

That's cool, I don't know of a single service I use personally that supports it.


But what is the 2fa device when you try to connect to an app/website from your phone?

valid point, preferably don't have the passwords on the phone at all

on ios/android apps at least the walled garden plus universal sandbox makes stealing credentials quite difficult

vs. randomly downloaded .exe files on windows being able to take everything instantly


46 comments and not one has called out the fundamental issue: when you run software you have essentially given the author access to your machine as you.

The problem is that every piece of software has way too much power, way more than they need. Apple with iOS has done a pretty good job (AFAICT) locking down what an App can do and there's _some_ of that on macOS. I don't know what Windows is doing. And of course, even it were perfect we'll still have vulnerable platforms for decades, but at least IT dept. can curb them.


I've found that the only safe-ish advice for my parents when they're downloading software is to click the link through Wikipedia. Obviously a bad actor could edit the page to point to something malicious, but generally the site has accurate links.

I have told them for multiple years to not simply google "open office" and expect to get the result you want.


Seeing this run so rampant, it's just best to block all ads everywhere.

Google doesn't care. We need someone copyrighting malware as art, then let RIAA/MPAA et al. lawyers do their job.

/s?


Is this any different to the high ranking results for websites which host re-uploads of literally every windows installer for every piece of software and sometimes even include malware? That, as far as I am aware, has been an issue for at least 10-15 years already, if not more.

FWIW, Bing was worse last time I used it. Not only for FOSS, but for e.g. Google Chrome! First page of results were useless, top two were malware.

> Then you’d proceed to click on the first official-looking link you saw, even if it’s an ad.

Nope. I have made a conscious effort to never click on any result labeled as an ad for the past 20+ years, even if it appears to be exactly what I'm looking for. At this point it's actually subconscious.


The solution is to get an adblocker, ublock origin preferred.

Google has been serving you links to spyware-laden malversions of software for like 20+ years, how is this news?

I recently almost got scammed by Google malvertising in a free Android app for scanning QR codes as I was trying to pay for parking.

Fortunately my bank blocked the operation.

It's weird that Google has zero responsibility in those cases.



