Next: 01:21 Req#6057 P Help Generated 20 Tue 00:32 Size: 36K Articles: 52 Next
23:51 Google Confirms the Nest Secure Has Been Discontinued
Google's Nest Secure alarm system, which was discussed on Slashdot for featuring an unlisted, disabled microphone, has been discontinued by Google, though it will continue functioning. Android Police reports: Google released the Nest Guard in 2017 as a simple security system with motion sensors and a keypad, but it never received an upgrade, even as other Nest devices were updated again and again. The product page for the Nest Guard on the Google Store was updated last week with a 'No longer available' message, possibly indicating it had been discontinued. Google later confirmed to Android Police that the Nest Guard will no longer be sold, but it will continue to work for people who have already bought it.
23:51 IKEA To Buy Back Used Furniture In Recycling Push
Last week, the BBC reported that IKEA, the world's biggest furniture business, is planning to launch a scheme to buy back furniture you no longer need or want. From the report: Under the plan, it will offer vouchers worth up to 50% of the original price, to be spent at its stores. The "Buy Back" initiative will launch to coincide with Black Friday. "By making sustainable living more simple and accessible, Ikea hopes that the initiative will help its customers take a stand against excessive consumption this Black Friday and in the years to come," it said in reference to November 27, when lots of retailers offer discounts on their products. The international scheme will see customers given vouchers to spend at Ikea stores, the value of which will depend on the condition of the items they are returning. Customers must log the item they wish to return and will then be given an estimate of its value. "As new" items, with no scratches, will get 50% of the original price, "very good" items, with minor scratches, will get 40% and "well used" items, with several scratches, will get 30%. They should then return them -- fully assembled -- to the returns desk, where they will be checked and the final value agreed. The offer, which will run in 27 countries, applies to furniture typically without upholstery, such as the famous Billy bookcases, chairs, stools, desks and dining tables. Ikea said that anything that cannot be resold will be recycled. Ikea plans to have dedicated areas in every store where people can sell back their old furniture and find repaired or refurbished furniture.
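The reported tiers amount to a simple condition-to-rate mapping. A minimal sketch of that rule (the function name and structure are mine, not IKEA's):

```python
# Sketch of the reported "Buy Back" voucher tiers. The dictionary and
# function are illustrative only; IKEA's actual valuation process also
# involves an in-store check of the returned item.
VOUCHER_RATES = {
    "as new": 0.50,     # no scratches
    "very good": 0.40,  # minor scratches
    "well used": 0.30,  # several scratches
}

def voucher_value(original_price: float, condition: str) -> float:
    """Estimate the store voucher for a returned item."""
    try:
        rate = VOUCHER_RATES[condition]
    except KeyError:
        raise ValueError(f"unknown condition: {condition!r}")
    return round(original_price * rate, 2)

# e.g. a hypothetical 79.00 bookcase with minor scratches
print(voucher_value(79.00, "very good"))  # prints 31.6
```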
Minimum wage should be at least $15 so double that to $4-5B
> YouTube brought in $15 billion in ads revenue in 2019,
> At that point, you would begin to face very serious questions about whether "this whole YouTube thing" is even worth it, from a financial perspective.
Maybe, but even at that price they are still making money, just less of it. I understand that people might _like_ to make more money, but there should be a limit on how much one can cut corners and disclaim responsibility for outcomes, and if you don't have a sustainable business after taking costs into consideration, then you don't have a sustainable business.
> you have to let at least some videos onto the service without a human reviewing them
? Have to ? YouTube doesn't _have to_ exist at all if their costs (financial and social) outweigh their benefits. I guess this is a fundamental difference in perspective.
In any event, it does probably make sense to develop trust in content creators so that it is no longer necessary to review everything: manually vet creators in the same way that any publisher chooses what they publish, and don't recommend or monetize content that isn't vetted. My guess is that advertisers would find the service more valuable if they had more confidence in the kind of content their ads would run on, and malicious actors would find less value if they couldn't monetize their content. One could go as far as charging hosting fees for non-vetted, monetized uploads and restricting distribution by default until content is vetted, maybe with two levels of community standards: a high standard for the content that YouTube wants to promote (because they are under no obligation to promote and monetize everything), and a more relaxed standard for personal content.
> > Commercial companies could solve this for themselves. It just takes money. And people. The latter is the problem, big tech is trying very hard to rid themselves of humans unless they perform piecework as meat robots. See also content moderation hellscape.
> I'm saying "just throw humans/money at the problem" is not a complete, workable solution all by itself.
I very much doubt that the OP, in their brief message, intended to convey that literally spending money hiring a bunch of people for content moderation would magically solve everything by itself, and that no other action could be recommended, so that seems a bit of a straw man. All the questions you ask are solvable problems, and spending significant money on content moderation and licensing of content is a prerequisite to solving them (moderation AI has not worked). The fact that Alphabet and YouTube are choosing not to solve them, because they like keeping the money and don't mind the pollution they are enabling, does not mean that we should just throw up our hands and give up.
Doubling those costs is what brings in my reply: if you double the cost of content moderation, then smaller platforms like Vimeo will spend over 100% of revenue on moderation. Yes, you've dealt with the limited moderation on YouTube, but by doing so, you've set a standard that no other platform can keep up with, thus ensuring that YouTube's monopoly is permanent.
Now, maybe this is the desired end state - but if I were Alphabet, I would be wary of doing something so anti-competitive without a legislative compulsion. At least if they limit moderation to what Vimeo et al can afford, competition can in theory fix certain classes of error on the part of YouTube's moderators. If you insist on moderation that costs Vimeo more than 100% of revenue, then should YouTube err on the side of caution, there's now no alternative host to run to.
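The scaling argument in the two comments above can be made concrete with toy numbers (all of them invented for illustration; none are real figures for YouTube or Vimeo): review cost scales with hours uploaded, while revenue scales with audience, so a mandated per-hour review cost hits small platforms far harder.

```python
# Entirely hypothetical figures illustrating why a fixed per-hour
# moderation cost can exceed 100% of a smaller platform's revenue
# while remaining a modest fraction of a large platform's.
platforms = {
    # name: (annual revenue in $, hours of video uploaded per year)
    "BigVideo":   (15_000_000_000, 300_000_000),
    "SmallVideo": (100_000_000,    20_000_000),
}

REVIEW_COST_PER_HOUR = 10.0  # assumed cost of human review per hour of video

for name, (revenue, hours) in platforms.items():
    cost = hours * REVIEW_COST_PER_HOUR
    print(f"{name}: moderation = {cost / revenue:.0%} of revenue")
# BigVideo: moderation = 20% of revenue
# SmallVideo: moderation = 200% of revenue
```

The point is structural, not the specific numbers: a platform with less revenue per uploaded hour crosses the 100%-of-revenue line first.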
23:24 Juice packs
Ars Technica article about Juicero from April 2017 (with images). The company folded about five months later.
P.S. "Juice packets" are not related to network packets! 😃
I chose Vimeo for a reason; they charge all but the smallest uploaders for accounts (unless you pay, you're limited to 5 GiB of video storage and 10 uploads totalling no more than 500 MiB per week), and their ARPU is low enough that they'd have to massively hike prices to afford the sort of moderation that was described.
In a competitive market, it's a fair bet that everyone has set their prices to maximise their own profits (which starts by maximising revenue). If moderation is going to put a majority of players out of business, and leave us all at the mercy of Alphabet and Facebook's decisions, that's not exactly selling the idea...
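The free-tier limits described above reduce to three independent quota checks. A minimal sketch, assuming the limits as stated in the comment (the function name and call shape are mine):

```python
# Quota check for a hypothetical free tier: 5 GiB total storage, and
# per week at most 10 uploads totalling no more than 500 MiB.
GIB = 1024 ** 3
MIB = 1024 ** 2

def upload_allowed(total_stored: int, week_uploads: int, week_bytes: int,
                   new_upload: int) -> bool:
    """Check a prospective upload (sizes in bytes) against all three caps."""
    if total_stored + new_upload > 5 * GIB:
        return False  # lifetime storage cap
    if week_uploads + 1 > 10:
        return False  # weekly upload-count cap
    if week_bytes + new_upload > 500 * MIB:
        return False  # weekly volume cap
    return True
```

For example, a fresh account can take a 100 MiB upload, but an account that has already uploaded 10 files this week cannot take an 11th, however small.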
That's a BS argument, sorry.
If the newly added user space API is really **that** contentious, then by the time the maintainer is going to merge the changes, he asks for a final iteration with proper documentation. Simply because if it's not done at that moment, it might never be done in the future.
Stop justifying bad behavior please.
23:24 OpenWrt and SELinux
So ubusd, logd and ntpd can run as non-root without it being too complicated.
uhttpd is kinda not my department... For dropbear I currently don't see a reasonable option which would allow it to run as non-root.
Neither does another fix (https://lore.kernel.org/linux-bluetooth/20201016180956.70...), also referred to above.
The public documents do not list net income for YouTube alone, so it's hard to say whether this is actually true. You would have to add subscription income (for services like YouTube TV) and subtract other costs (such as servers, fiber, etc. as I mentioned). I don't think all of those numbers are public.
> ? Have to ? YouTube doesn't _have to_ exist at all if their costs (financial and social) outweigh their benefits. I guess this is a fundamental difference in perspective.
No, it's not a fundamental difference in perspective. I just took "or else YouTube wouldn't exist at all" as a given. Perhaps I should have been more explicit about that. Obviously, "shut down YouTube" will solve all problems arising out of YouTube's existence; I did not realize I needed to say so explicitly.
> In any event it does probably make sense to develop trust in content creators so that it is no longer necessary to review everything, by manually vetting creators in the same way that any publisher chooses what they publish, and not recommending or monetizing content that isn't vetted.
Then you're not really YouTube any more. You're Netflix, or Hulu. IMHO that's in the same territory as YouTube not existing, since it would be a fundamentally different service, and one which other people are already selling.
(For the record, YouTube actually does sell licensed content alongside its free or ad-supported user-generated content.)
> I very much doubt that the OP, in their brief message, intended to convey that literally spending money hiring a bunch of people for content moderation would magically solve everything by itself, and that no other action could be recommended, so that seems a bit of a straw man.
You're probably right. But I hear this argument so often, from so many different people, and so rarely with any kind of nuance, that I felt the need to make my point anyway.
> All the questions you ask are solvable problems
Yes, they probably are.
> and spending significant money on content moderation and licensing of content is a pre-requisite to solving them (moderation AI has not worked)
I agree that AI is no better than humans, and that it is actively causing problems right now. However, AI that is properly designed should not be too much *worse* than humans, in statistical aggregate (because if your machines are statistically different from your humans, then your ML algorithm is poorly-trained). In my view, the real problem is that, when the AI tells you "no," there is often no one to escalate to, whereas when a human tells you "no," you very often can escalate to another human. YouTube ought to fix that problem, and I agree that the solution probably should involve hiring more humans in some capacity. Of course, if you give every user the right to escalate to a human every time the AI says "no," then you might as well escalate to the human automatically, at which point you're going to be hiring a lot of humans. So it probably ends up being more complicated than that.
(I'm deliberately not expressing an opinion on whether YouTube's AI is "properly designed" because I have not worked on it and have no idea whether this is the case.)
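The workload argument in the comment above (if every AI "no" can be appealed to a human, and everyone appeals, the humans end up reviewing the AI's entire reject pile anyway) is just a product of three factors. A toy sketch, with all numbers invented:

```python
# Hypothetical two-tier review pipeline: an AI classifier handles the
# bulk, and any rejection the uploader appeals goes to a human queue.
def expected_human_reviews(n_videos: int, ai_reject_rate: float,
                           appeal_rate: float) -> float:
    """Expected number of human reviews under the appeal model."""
    return n_videos * ai_reject_rate * appeal_rate

# With universal appeals, human workload equals the AI's full reject pile:
print(expected_human_reviews(1_000_000, 0.05, 1.0))   # prints 50000.0
# With a 10% appeal rate, the AI still absorbs most of the load:
print(expected_human_reviews(1_000_000, 0.05, 0.10))  # prints 5000.0
```

This is why "let everyone escalate" collapses into "hire humans to re-review every rejection": the AI only saves labor to the extent that its rejections go unappealed.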
> the fact that Alphabet and YouTube are choosing not to solve them because they like keeping the money and don't mind the pollution they are enabling, does not mean that we should just throw up our hands and give up.
I didn't say that we should give up. But, as I mentioned, your proposal would take YouTube out of the realm of a user-generated content service and into the realm of a curated content service. Maybe user-generated content just doesn't work, or maybe there are other solutions that would allow it to continue existing. I really don't know the best way out of this hole, but I do know that it's a harder problem than is usually appreciated. That was all I wanted to convey; I genuinely *don't know* what we should do about this hard problem.
What makes you think that any hosting provider's ToS will be any better? After all, the host is liable for whatever the user "publishes" in this new regime.
No, what will actually happen is that user-generated content will get shut down, hard, across the board. And only folks with deep pockets will be able to afford to play.
Mission accomplished, I guess.
If you really think that effective content moderation would fundamentally destroy the service, that hosting harmful and fraudulent content is so core to the business model that kicking it off the platform would leave them unable to differentiate from Netflix, well then I guess you are very sympathetic to the current management of YouTube, who are also unwilling to moderate, but you are fundamentally arguing for it to be shut down if the negative social costs of its operation are accounted for.
> However, AI that is properly designed should not be too much *worse* than humans, in statistical aggregate (because if your machines are statistically different from your humans, then your ML algorithm is poorly-trained).
I'm just going to assume that the AIs which moderate content and recommend videos are competently built, and that the lack of good results is because the approach is fundamentally wrong and the incentives are wrong (prioritizing watch time and revenue over quality and safety)
> Maybe user-generated content just doesn't work
I mean, without setting and enforcing some standard it really doesn't, every place will eventually turn into 4chan/8kun or whatever if you don't reliably, aggressively and consistently kick people off when they misbehave. Those people who are no longer welcome can always go make their own service (eg. Parler or whatever)
> That was all I wanted to convey; I genuinely *don't know* what we should do about this hard problem.
It's probably going to require changes to the terms of service, and it's probably going to make less money. Once you accept that as a possibility, it's easier to conceive of ways to reduce the size of the problem; fixing this without changing anything is impossible.
> I'm just going to assume that the AIs which moderate content and recommend videos are competently built, and that the lack of good results is because the approach is fundamentally wrong and the incentives are wrong (prioritizing watch time and revenue over quality and safety)
Unfortunately, at present, it's a case of "what humans can design, humans can circumvent". All the evidence is that humans will always be able to game the system. Look at the current worries over deep fakes. An un-gameable system is currently beyond our capabilities ...
23:24 Juice packs
They say that no battle plan survives five minutes contact with the enemy, but without a plan you don't stand a chance.
So, pray tell, what is "harmful" content? And why is your definition better than, say, mine?
Or why is the United States' definition better than, say, China's?
Even "fraudulent content" is completely okay most of the time when it's called "advertising" or even "propaganda"
You conveniently ignore hosting providers being targeted solely because it's their hardware. Or the network provider.
22:21 Instagram's Handling of Kids' Data Is Now Being Probed In the EU
Facebook's lead data regulator in Europe has opened another two probes into its business empire -- both focused on how the Instagram platform processes children's information. TechCrunch reports: The action by Ireland's Data Protection Commission (DPC), reported earlier by the Telegraph, comes more than a year after a U.S. data scientist reported concerns to Instagram that its platform was leaking the contact information of minors. David Stier went on to publish details of his investigation last year -- saying Instagram had failed to make changes to prevent minors' data being accessible. He found that children who changed their Instagram account settings to a business account had their contact info (such as an email address and phone number) displayed unmasked via the platform -- arguing that "millions" of children had had their contact information exposed as a result of how Instagram functions. Facebook disputes Stier's characterization of the issue -- saying it has always made it clear that contact info is displayed if people choose to switch to a business account on Instagram. It also does now let people opt out of having their contact info displayed if they switch to a business account. Nonetheless, its lead EU regulator has now said it has identified "potential concerns" relating to how Instagram processes children's data. "The DPC has been actively monitoring complaints received from individuals in this area and has identified potential concerns in relation to the processing of children's personal data on Instagram which require further examination," it writes. The regulator's statement specifies that the first inquiry will examine the legal basis Facebook claims for processing children's data on the Instagram platform, and also whether or not there are adequate safeguards in place. [...] The DPC says the second inquiry will focus on the Instagram profile and account settings -- looking at "the appropriateness of these settings for children." 
"Amongst other matters, this Inquiry will explore Facebook's adherence with the requirements in the GDPR in respect to Data Protection by Design and Default and specifically in relation to Facebook's responsibility to protect the data protection rights of children as vulnerable persons," it adds. A Facebook company spokesperson said in a statement: "We've always been clear that when people choose to set up a business account on Instagram, the contact information they shared would be publicly displayed. That's very different to exposing people's information. We've also made several updates to business accounts since the time of Mr. Stier's mischaracterization in 2019, and people can now opt out of including their contact information entirely. We're in close contact with the IDPC and we're cooperating with their inquiries."
22:21 Chess's Cheating Crisis: 'Paranoia Has Become the Culture'
An anonymous reader quotes a report from The Guardian: In one chess tournament, five of the top six were disqualified for cheating. In another, the doting parents of 10-year-old competitors furiously rejected evidence that their darlings were playing at the level of the world No 1. And in a third, an Armenian grandmaster booted out for suspicious play accused his opponent of "doing pipi in his Pampers." These incidents may sound extreme but they are not isolated -- and they have all taken place online since the start of the coronavirus pandemic. Chess has enjoyed a huge boom in internet play this year as in-person events have moved online and people stuck at home have sought new hobbies. But with that has come a significant new problem: a rise in the use of powerful chess calculators to cheat on a scale reminiscent of the scandals that have dogged cycling and athletics. One leading 'chess detective' said that the pandemic was "without doubt creating a crisis". At the heart of the problem are programs or apps that can rapidly calculate near-perfect moves in any situation. To counter these engines, players in more and more top matches must agree to be recorded by multiple cameras, be available on Zoom or WhatsApp at any time, and grant remote access to their computers. They may not be allowed to leave their screens, even for toilet breaks. In some cases they must have a "proctor" or invigilator search their room and then sit with them throughout a match. [E]ye-tracking programs may be a way to raise a red flag if a player appears to be looking away with suspicious frequency. Chess.com, the world's biggest site for online play, said it had seen 12 million new users this year, against 6.5 million last year. The cheating rate has jumped from between 5,000 and 6,000 players banned each month last year to a high of almost 17,000 in August. 
The growth in cheating and a corresponding explosion in social media discussion of the problem has created a new atmosphere of suspicion and recrimination. "Paranoia has become the culture," said Le-Marechal, whom a friend declared "the cyber chess detective" when he got the job. "There is this very romantic vision of the game which is being scuppered." Without a significant culture change, most say, the cheats are unlikely to go straight.
21:28 We asked, you told us: Most of you think the OnePlus 7T is still a great buy
The OnePlus 7T might be a year old, but it looks like now is as good a time as any to buy it.
21:28 The Galaxy S30 in January 2021 seems real, and more tech news today
Expect the next Galaxy S flagship to be announced in Jan 2021 but with huge camera bump, plus more tech news you need to know!
21:28 Sony Xperia 5 II review second opinion: Heading in the right direction
The Sony Xperia 5 II isn't perfect, but it's a very good phone and an even more promising sign of things to come from Sony.
21:28 OnePlus is forcing the Amazon app on the OnePlus 8T (Update: OnePlus response)
Update: OnePlus has clarified the reason for seeding the Amazon app on the OnePlus 8T via an OTA update.
21:28 Xiaomi Mi 10T series is here: All you need to know about the budget flagships
All you need to know about the Xiaomi Mi 10T, Mi 10T Pro, and Mi 10T Lite.
21:28 Poll: What do you think of the leaked Samsung Galaxy S30 design?
You've seen the first renders, now tell us what you think.
21:28 Early Pixel 5 owners report gaps between the screen and body
It could let dust or water inside in the wrong circumstances.
21:28 LastPass free vs premium: Is it worth the upgrade?
LastPass is one of the most popular password manager apps. See whether LastPass Free or Premium is better for you!
21:28 Google overhauls the interface on Smart Displays
A dark mode, multi-account support and more at-a-glance info are part of the upgrade.
21:28 Google discontinues its languishing Nest Secure alarm system
It never really garnered much attention, for the right reasons at least.