
Executives at Instagram are planning to build a version of the popular photo-sharing app that can be used by children under the age of 13, according to an internal company post obtained by BuzzFeed News. “I’m excited to announce that going forward, we have identified youth work as a priority for Instagram and have added it to our H1 priority list,” Vishal Shah, Instagram’s vice president of product, wrote on an employee message board on Thursday. “We will be building a new youth pillar within the Community Product Group to focus on two things: (a) accelerating our integrity and privacy work to ensure the safest possible experience for teens and (b) building a version of Instagram that allows people under the age of 13 to safely use Instagram for the first time.” The current Instagram policy forbids children under the age of 13 from using the service. According to the post, the work would be overseen by Adam Mosseri, the head of Instagram, and led by Pavni Diwanji, a vice president who joined parent company Facebook in December. Previously, Diwanji worked at Google, where she oversaw the search giant’s children-focused products, including YouTube Kids. The internal announcement comes two days after Instagram said it needs to do more to protect its youngest users. Following coverage and public criticism of the abuse, bullying, or predation faced by teens on the app, the company published a blog post on Tuesday titled “Continuing to Make Instagram Safer for the Youngest Members of Our Community.” That post makes no mention of Instagram’s intent to build a product for children under the age of 13, but states, “We require everyone to be at least 13 to use Instagram and have asked new users to provide their age when they sign up for an account for some time.” The announcement lays the groundwork for how Facebook — whose family of products is used by 3.3 billion people every month — plans to expand its user base. 
While various laws limit how companies can build products for and target children, Instagram clearly sees kids under 13 as a viable growth segment, particularly because of the app’s popularity among teens. In a short interview, Mosseri told BuzzFeed News that the company knows that “more and more kids” want to use apps like Instagram and that it was a challenge verifying their age, given most people don’t get identification documents until they are in their mid-to-late teens. “We have to do a lot here,” he said, “but part of the solution is to create a version of Instagram for young people or kids where parents have transparency or control. It’s one of the things we’re exploring.” Mosseri added that it was early in Instagram’s development of the product and that the company doesn’t yet have a “detailed plan.” Priya Kumar, a Ph.D. candidate at the University of Maryland who researches how social media affects families, said a version of Instagram for children is a way for Facebook to hook in young people and normalize the idea “that social connections exist to be monetized.” “From a privacy perspective, you're just legitimizing children’s interactions being monetized in the same way that all of the adults using these platforms are,” she said. Kumar said children who use YouTube Kids often migrate to the main YouTube platform, which is a boon for the company and concern for parents. “A lot of children, either by choice or by accident, migrate onto the broader YouTube platform,” she said. “Just because you have a platform for kids, it doesn’t mean the kids are going to stay there.” The development of an Instagram product for kids follows the 2017 launch of Messenger Kids, a Facebook product aimed at children between the ages of 6 and 12. 
After the product’s launch, a group of more than 95 advocates for children’s health sent a letter to Facebook CEO Mark Zuckerberg, calling for him to discontinue the product and citing research that “excessive use of digital devices and social media is harmful to children and teens, making it very likely this new app will undermine children’s healthy development.” Facebook said it had consulted an array of experts in developing Messenger Kids. Wired later revealed that the company had a financial relationship with most of the people and organizations that had advised on the product. Details can be found on OUR FORUM.
Google is going it alone with its proposed advertising technology to replace third-party cookies. Every major browser that uses the open-source Chromium project has declined to use it, and it’s unclear what that will mean for the future of advertising on the web. A couple of weeks ago, Google announced it was beginning to test a new ad technology inside Google Chrome called Federated Learning of Cohorts, or FLoC. It uses an algorithm to look at your browsing history and place you in a group of people with similar browsing histories so that advertisers can target you. It’s more private than cookies, but it’s also complicated and has some potential privacy implications of its own if it’s not implemented right. Google Chrome is built on an open-source project, so FLoC was implemented as part of that project, where other browsers could include it. I am not aware of any Chromium-based browser outside of Google’s own that will implement it, and I am aware of many that will refuse. One note I’ll drop here is that I am relieved that nobody else is implementing FLoC right away, because the way FLoC is constructed puts a very big responsibility on a browser maker. If implemented badly, FLoC could leak sensitive information. It’s a complicated technology that does appear to keep you semi-anonymous, but there are enough details to hide dozens of devils. Anyway, here’s Brave: “The worst aspect of FLoC is that it materially harms user privacy, under the guise of being privacy-friendly.” And here’s Vivaldi: “We will not support the FLoC API and plan to disable it, no matter how it is implemented. It does not protect privacy, and it certainly is not beneficial to users to unwittingly give away their privacy for the financial gain of Google.” And here’s Opera: “As you probably know, Opera has a long history of introducing privacy features that benefit our users: it was the first major browser to introduce built-in ad blocking, a browser VPN, and other privacy-centric features.
The significance now is the end of third-party cookies, which will reduce the amount of cross-website tracking on the web. While we and other browsers are discussing new and better privacy-preserving advertising alternatives to cookies, including FLoC, we have no current plans to enable features like this in the Opera browsers in their current form. Generally speaking, we do, however, think it’s too early to say in which direction the market will move or what the major browsers will do.” DuckDuckGo isn’t thought of as a browser, but it does make browsers for iOS and Android. On desktop, it has already made a browser extension for other browsers that blocks FLoC. And the Electronic Frontier Foundation, which is very much against FLoC, has even made a website to let you know if you’re one of the few Chrome users who have been included in Google’s early tests. But maybe the most important Chromium-based browser not made by Google is Microsoft Edge. It is a big test for Google’s proposed FLoC technology: if Microsoft isn’t going to support it, that would pretty much mean Chrome really will be going it alone with this technology. As for Apple’s Safari, I will admit I didn’t reach out for comment because at this point it’s not difficult to guess what the answer will be. Apple, after all, deserves some credit for changing everybody’s default views on privacy. However, the story here is actually much more interesting than you might guess at first. John Wilander is a WebKit engineer at Apple who works on Safari’s privacy-enhancing Intelligent Tracking Prevention features. Wilander’s reply jibes with Microsoft’s statement that “the industry is on a journey” when it comes to balancing new advertising technologies and privacy. But it speaks to something really important: web standards people take their jobs seriously and are seriously committed to the web standards process that creates the open web. Read this posting in its entirety on OUR FORUM.
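Stepping back to the mechanics: Chrome’s trial reportedly assigns cohorts by applying SimHash to the domains in your browsing history, so that similar histories land in the same group. Here is a toy sketch of that clustering idea; the bit width, hash function, and example histories are illustrative assumptions, not Chrome’s actual parameters.

```python
import hashlib

def simhash(domains, bits=16):
    """Toy SimHash: users with similar domain sets get similar cohort IDs."""
    vec = [0] * bits
    for d in domains:
        h = int(hashlib.sha256(d.encode()).hexdigest(), 16)
        for i in range(bits):
            vec[i] += 1 if (h >> i) & 1 else -1
    # Collapse the tally vector back into a single cohort-ID-like integer.
    return sum(1 << i for i in range(bits) if vec[i] > 0)

# Hypothetical browsing histories (made-up domains):
alice = simhash(["news.example", "shop.example", "mail.example"])
bob   = simhash(["news.example", "shop.example", "mail.example"])
carol = simhash(["garden.example", "cooking.example"])
print(alice == bob)  # prints True: identical histories share a cohort
```

The point the critics make follows directly from this construction: the cohort ID is a lossy digest of your browsing history, so handing it to every site leaks a summary of that history by design.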

Over the last few years, researchers have found a shocking number of vulnerabilities in seemingly basic code that underpins how devices communicate with the Internet. Now, a new set of nine such vulnerabilities are exposing an estimated 100 million devices worldwide, including an array of Internet-of-things products and IT management servers. The larger question researchers are scrambling to answer, though, is how to spur substantive changes—and implement effective defenses—as more and more of these types of vulnerabilities pile up. Dubbed Name:Wreck, the newly disclosed flaws are in four ubiquitous TCP/IP stacks, code that integrates network communication protocols to establish connections between devices and the Internet. The vulnerabilities, present in operating systems like the open source project FreeBSD, as well as Nucleus NET from the industrial control firm Siemens, all relate to how these stacks implement the Domain Name System, the internet’s phone book. They would all allow an attacker to either crash a device and take it offline or gain control of it remotely. Both of these attacks could potentially wreak havoc in a network, especially in critical infrastructure, health care, or manufacturing settings where infiltrating a connected device or IT server can disrupt a whole system or serve as a valuable jumping-off point for burrowing deeper into a victim's network. All of the vulnerabilities, discovered by researchers at the security firms Forescout and JSOF, now have patches available, but that doesn't necessarily translate to fixes in actual devices, which often run older software versions. Sometimes manufacturers haven't created mechanisms to update this code, but in other situations they don't manufacture the component it's running on and simply don't have control of the mechanism.
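Name:Wreck-class DNS bugs typically involve message compression: DNS names can contain pointers (RFC 1035) back to earlier bytes of the message, and a parser that follows them without bounds or loop checks can be crashed or exploited. The sketch below is not code from any affected stack; it illustrates the defensive checks (a jump limit and bounds checks) that vulnerable implementations were missing. The message bytes in the example are hand-assembled for illustration.

```python
def parse_dns_name(msg: bytes, offset: int, max_jumps: int = 16) -> str:
    """Defensively parse a (possibly compressed) DNS name from a message."""
    labels, jumps = [], 0
    while True:
        if offset >= len(msg):
            raise ValueError("offset past end of message")
        length = msg[offset]
        if length == 0:                      # zero byte terminates the name
            return ".".join(labels)
        if length & 0xC0 == 0xC0:            # top two bits set: compression pointer
            jumps += 1
            if jumps > max_jumps:            # careless stacks loop forever here
                raise ValueError("too many compression pointers")
            if offset + 1 >= len(msg):
                raise ValueError("truncated pointer")
            offset = ((length & 0x3F) << 8) | msg[offset + 1]
        else:                                # ordinary length-prefixed label
            if offset + 1 + length > len(msg):
                raise ValueError("label overruns message")
            labels.append(msg[offset + 1 : offset + 1 + length].decode("ascii"))
            offset += 1 + length

# "www.example" encoded with compression: "example" stored once at byte 0,
# and "www" followed by a pointer (0xC0 0x00) back to it.
msg = b"\x07example\x00\x03www\xc0\x00"
print(parse_dns_name(msg, 9))  # prints www.example
```

A malicious message whose pointer refers to itself (e.g. `b"\xc0\x00"`) would spin a checkless parser forever; here it simply raises an error.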
“With all these findings, I know it can seem like we’re just bringing problems to the table, but we're really trying to raise awareness, work with the community, and figure out ways to address it,” says Elisa Costante, vice president of research at Forescout, which has done other, similar research through an effort it calls Project Memoria. “We've analyzed more than 15 TCP/IP stacks both proprietary and open source and we've found that there's no real difference in quality. But these commonalities are also helpful, because we've found they have similar weak spots. When we analyze a new stack, we can go and look at these same places and share those common problems with other researchers as well as developers.” The researchers haven't seen evidence yet that attackers are actively exploiting these types of vulnerabilities in the wild. But with hundreds of millions—perhaps billions—of devices potentially impacted across numerous different findings, the exposure is significant. Siemens USA chief cybersecurity officer Kurt John told Wired in a statement that the company “works closely with governments and industry partners to mitigate vulnerabilities … In this case we’re happy to have collaborated with one such partner, Forescout, to quickly identify and mitigate the vulnerability." The researchers coordinated disclosure of the flaws with developers releasing patches, the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, and other vulnerability-tracking groups. Similar flaws found by Forescout and JSOF in other proprietary and open source TCP/IP stacks have already been found to expose hundreds of millions or even possibly billions of devices worldwide. Issues show up so often in these ubiquitous network protocols because they've largely been passed down untouched through decades as the technology around them evolves. Essentially, since it ain't broke, no one fixes it. 
“For better or worse, these devices have code in them that people wrote 20 years ago—with the security mentality of 20 years ago,” says Ang Cui, CEO of the IoT security firm Red Balloon Security. “And it works; it never failed. But once you connect that to the Internet, it’s insecure. And that’s not that surprising, given that we've had to really rethink how we do security for general-purpose computers over those 20 years.” The problem is notorious at this point, and it's one that the security industry hasn't been able to quash, because vulnerability-ridden zombie code always seems to reemerge. “There are lots of examples of unintentionally recreating these low-level network bugs from the '90s,” says Kenn White, co-director of the Open Crypto Audit Project. “A lot of it is about lack of economic incentives to really focus on the quality of this code.” There's some good news about the new slate of vulnerabilities the researchers found. Though the patches may not proliferate completely anytime soon, they are available. And other stopgap mitigations can reduce the exposure, namely keeping as many devices as possible from connecting directly to the Internet and using an internal DNS server to route data. Forescout's Costante also notes that exploitation activity would be fairly predictable, making it easier to detect attempts to take advantage of these flaws. Visit OUR FORUM to learn more.

FLoC is a recent Google proposal that would have your browser share your browsing behavior and interests by default with every site and advertiser with which you interact. Brave opposes FLoC, along with any other feature designed to share information about you and your interests without your fully informed consent. To protect Brave users, Brave has removed FLoC in the Nightly version of both Brave for desktop and Android. The privacy-affecting aspects of FLoC have never been enabled in Brave releases; the additional implementation details of FLoC will be removed from all Brave releases with this week’s stable release. Brave is also disabling FLoC on our websites, to protect Chrome users learning about Brave. Companies are finally being forced to respect user privacy (even if only minimally), pushed by trends such as increased user education, the success of privacy-first tools (e.g., Brave among others), and the growth of legislation including the CCPA and GDPR. In the face of these trends, it is disappointing to see Google, instead of taking the present opportunity to help design and build a user-first, privacy-first Web, proposing and immediately shipping in Chrome a set of smaller, ad-tech-conserving changes, which explicitly prioritize maintaining the structure of the Web advertising ecosystem as Google sees it. For the Web to be trusted and to flourish, we hold that much more is needed than the complex yet conservative chair-shuffling embodied by FLoC and Privacy Sandbox. Deeper changes to how creators pay their bills via ads are not only possible, but necessary. The success of Brave’s privacy-respecting, performance-maintaining, and site-supporting advertising system shows that more radical approaches work. We invite Google to join us in fixing the fundamentals, undoing the harm that ad-tech has caused, and building a Web that serves users first.
The rest of this post explains why we believe FLoC is bad for Web users, bad for sites, and a bad direction for the Web in general. FLoC harms privacy directly and by design: FLoC shares information about your browsing behavior with sites and advertisers that otherwise wouldn’t have access to that information. Unambiguously, FLoC tells sites about your browsing history in a new way that browsers categorically do not today. Google claims that FLoC is privacy-improving, despite intentionally telling sites more about you, for broadly two reasons, each of which conflates unrelated topics. First, Google says FLoC is privacy-preserving compared to sending third-party cookies. But this is a misleading baseline to compare against. Many browsers don’t send third-party cookies at all; Brave never has. Saying a new Chrome feature is privacy-improving only when compared to status-quo Chrome (the most privacy-harming popular browser on the market) is misleading, self-serving, and a further reason for users to run away from Chrome. Second, Google defends FLoC as not privacy-harming because interest cohorts are designed not to be unique to a user, using k-anonymity protections. This reflects a mistaken idea of what privacy is. Many things about a person are i) not unique, but still ii) personal and important, and shouldn’t be shared without consent. Whether I prefer to wear “men’s” or “women’s” clothes, whether I live according to my professed religion, whether I believe vaccines are a scam, whether I am a gun owner or a Brony fan, or a million other things: these are all aspects of our lives that we might like to share with some people but not others, and under our terms and control. FLoC adds an enormous amount of fingerprinting surface to the browser, as the whole point of the feature is for sites to be able to distinguish between user interest-group cohorts.
This undermines the work Brave is doing to protect users against browser fingerprinting and the statistically inferred cohort tracking enabled by fingerprinting attack surface. Google’s proposed solution to the increased fingerprinting risk from FLoC is both untestable and unlikely to work. Google proposes using a “privacy budget” approach to prevent FLoC from being used to track users. First, Brave has previously detailed why we do not think a “budget” approach is workable to prevent fingerprinting-based tracking. We stand by those concerns, and have not received any response from Google, despite having raised the concerns over a year ago. And second, Google has yet to specify how their “privacy budget” approach will work; the approach is still in “feasibility-testing” stages. Google is aware of some of these concerns, but gives them shallow treatment in their proposal. For example, Google notes that some categories (sexual orientation, medical issues, political party, etc.) will be exempt from FLoC, and that they are looking into other ways of preventing “sensitive” categories from being used in FLoC. Google’s approach here is fundamentally wrong. First, Google’s approach to determining whether a FLoC cohort is sensitive requires (in most cases) Google to record and collect that sensitive cohort in the first place! A system that determines whether a cohort is “sensitive” by recording how many people are in that sensitive cohort doesn’t pass the laugh test. Second, and more fundamental, the idea of creating a global list of “sensitive categories” is illogical and immoral. Whether a behavior is “sensitive” varies wildly across people. One’s mom may not find her interest in “women’s clothes” a private part of her identity, but one’s dad might (or might not! but, plainly, Google isn’t the appropriate party to make that choice). 
Similarly, an adult happily expecting a child might not find their interest in “baby goods” particularly sensitive, but a scared and nervous teenager might. More broadly, interests that are banal to one person, might be sensitive, private or even dangerous to another person. The point isn’t that Google’s list of “sensitive cohorts” will be missing important items. The point, rather, is that a “privacy preserving system” that relies on a single, global determination of what behaviors are “privacy sensitive,” fundamentally doesn’t protect privacy, or even understand why privacy is important. Visit OUR FORUM for more.
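The k-anonymity defense Brave criticizes above can be made concrete with a short sketch: a cohort label is only exposed once at least k users share it. The threshold, user names, and cohort numbers here are made up for illustration.

```python
from collections import Counter

def releasable_cohorts(user_cohorts: dict, k: int = 3) -> set:
    """Return cohort IDs shared by at least k users (the k-anonymity gate)."""
    counts = Counter(user_cohorts.values())
    return {cohort for cohort, n in counts.items() if n >= k}

# Hypothetical assignment: cohort 7 meets k=3, cohort 9 does not.
users = {"u1": 7, "u2": 7, "u3": 7, "u4": 9, "u5": 9}
print(releasable_cohorts(users))  # prints {7}
```

Brave’s objection survives this gate intact: cohort 7 is released precisely because thousands of people could share it, yet the shared interest it encodes may still be sensitive for some of them.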

A timely reminder has been shared of how the current global chip famine has affected processor prices, in this case specifically for the AMD Ryzen 9 5950X. While retailers who have tried to stay close to MSRP are invariably out of stock, those with Ryzen 9 5950X CPUs to sell are mostly setting astronomical price tags for the Zen 3 powerhouse. Those looking to snag a 16-core, 32-thread AMD Ryzen 9 5950X for a reasonable price will already be aware of how difficult a task that has become. The 2021 global chip shortage, caused by a combination of the coronavirus pandemic, companies shifting to a work-from-home strategy, and previously unpredictable rocketing demand, has led to much-wanted PC parts, especially high-end units like the Ryzen 9 5950X CPU and GeForce RTX 3090 GPU, being sold at greatly inflated prices. A recent Reddit post by a Redditor called locutusuk68 has triggered quite a discussion on the popular social website on this processor-pricing theme, with an accompanying screenshot revealing how the UK retailer Overclockers is currently selling the top-end Zen 3 processor for a staggering £959.99 (US$1,316/AUD$1,726). The MSRP for the AMD Ryzen 9 5950X is US$799, while PC builders in the UK may have expected to pay in the region of £750 (US$1,028/AUD$1,349) for the chip. In fact, one of the country’s largest electronics retailers, Currys, has the 16-core part listed for that fair price along with a price match guarantee. Of course, it’s out of stock. Shopping around does not really deliver much relief, because those stores that look like they might offer reasonable deals may either be unfamiliar (Box - £849.99) or have incredibly limited stock (CCL - £899). A listing on eBay for multiple units of the Ryzen 9 5950X has a “buy it now” offer at £1,085.49 (US$1,488/AUD$1,952) per part, while a retailer called OnBuy takes the biscuit with a price tag of £1,099.95 (US$1,508/AUD$1,978).
In fact, just for added shock value, there is even a mention of AMD’s Ryzen 9 5950X being priced at an insane £1,480.72 (US$2,030/AUD$2,662). Of course, this same discouraging picture for desktop DIYers exists in other markets: Best Buy also has a price match guarantee for the Zen 3 part at US$799 but is sold out, and if you take a look at Amazon there is sometimes stock listed as available – but in some cases, you have to be willing to part with US$1,288.99. However, retailers that are reliant on low unit sales are just utilizing an age-old business tactic of price hiking when demand exceeds supply. An accusatory finger can be pointed at Team Red, but did AMD really reckon on a million Ryzen 5000 unit sales within a few weeks of release? Supply is apparently ramping up, so arguably the best thing desktop PC builders can do right now is to hold on. Eventually, supply will catch up with demand and prices will fall… or Zen 4 might even be around by the time that happens. Follow this and more by visiting OUR FORUM.

Today, Google launched an “origin trial” of Federated Learning of Cohorts (aka FLoC), its experimental new technology for targeting ads. A switch has silently been flipped in millions of instances of Google Chrome: those browsers will begin sorting their users into groups based on behavior, then sharing group labels with third-party trackers and advertisers around the web. A random set of users have been selected for the trial, and they can currently only opt out by disabling third-party cookies. Although Google announced this was coming, the company has been sparse with details about the trial until now. We’ve pored over blog posts, mailing lists, draft web standards, and Chromium’s source code to figure out exactly what’s going on. EFF has already written that FLoC is a terrible idea. Google’s launch of this trial—without notice to the individuals who will be part of the test, much less their consent—is a concrete breach of user trust in the service of a technology that should not exist. Below we describe how this trial will work, and some of the most important technical details we’ve learned so far. FLoC is supposed to replace cookies. In the trial, it will supplement them. Google designed FLoC to help advertisers target ads once third-party cookies go away. During the trial, trackers will be able to collect FLoC IDs in addition to third-party cookies. That means all the trackers who currently monitor your behavior across a fraction of the web using cookies will now receive your FLoC cohort ID as well. The cohort ID is a direct reflection of your behavior across the web. This could supplement the behavioral profiles that many trackers already maintain. As described above, a random portion of Chrome users will be enrolled in the trial without notice, much less consent. Those users will not be asked to opt in. In the current version of Chrome, users can only opt out of the trial by turning off all third-party cookies.
Future versions of Chrome will add dedicated controls for Google’s “privacy sandbox,” including FLoC. But it’s not clear when these settings will go live, and in the meantime, users wishing to turn off FLoC must turn off third-party cookies as well. Turning off third-party cookies is not a bad idea in general. After all, cookies are at the heart of the privacy problems that Google says it wants to address. But turning them off altogether is a crude countermeasure, and it breaks many conveniences (like single sign-on) that web users rely on. Many privacy-conscious users of Chrome employ more targeted tools, including extensions like Privacy Badger, to prevent cookie-based tracking. Unfortunately, Chrome extensions cannot yet control whether a user exposes a FLoC ID. FLoC calculates a label based on your browsing history. For the trial, Google will default to using every website that serves ads — which is the majority of sites on the web. Sites can opt out of being included in FLoC calculations by sending an HTTP header, but some hosting providers don’t give their customers direct control of headers. Many site owners may not be aware of the trial at all. This is an issue because it means that sites lose some control over how their visitors’ data is processed. Right now, a site administrator has to make a conscious decision to include code from an advertiser on their page. Sites can, at least in theory, choose to partner with advertisers based on their privacy policies. But now, information about a user’s visit to that site will be wrapped up in their FLoC ID, which will be made widely available (more on that in the next section). Even if a website has a strong privacy policy and relationships with responsible advertisers, a visit there may affect how trackers see you in other contexts. For complete details visit OUR FORUM.
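The opt-out header in question is `Permissions-Policy: interest-cohort=()`, which tells Chrome not to include visits to that site in cohort calculations. As a minimal sketch of attaching it (the WSGI app below is a hypothetical stand-in for however a site actually serves pages; only the header name and value come from the trial's documented opt-out):

```python
def app(environ, start_response):
    """Minimal WSGI app whose responses opt the site out of FLoC."""
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        # Documented FLoC opt-out header from the origin trial:
        ("Permissions-Policy", "interest-cohort=()"),
    ])
    return [b"<p>This site is excluded from FLoC cohort calculation.</p>"]

# Exercise the app directly to show the header rides on every response.
sent = []
app({}, lambda status, headers: sent.append((status, headers)))
print(sent[0][1][1])  # prints ('Permissions-Policy', 'interest-cohort=()')
```

This is also why the hosting-provider caveat above matters: the opt-out lives in a response header, so a site owner who cannot configure headers cannot send it.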