
Slashdot

News for nerds, stuff that matters

Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi

7 min 18 sec ago
Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation." The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter.

In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car.

The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff, and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration. In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco. In June, the company's president and chief technology officer Mohamed Elshenawy is slated to speak at a conference on artificial-intelligence quality in San Francisco.

Dow Jones news services published this quote from a Cruise spokesperson:
"The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."

Read more of this story at Slashdot.

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities

1 hour 7 min ago
Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper. "Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way. Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection. Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. 
We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...? Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet. Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task. But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
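The first defensive layer described above (screening inputs against known attack strings, backed by a second classifier pass) can be sketched in a few lines. This is a minimal illustrative sketch only: the regex patterns, the keyword heuristic standing in for a trained classifier, and the `sanitize` helper are all invented for illustration, not drawn from any production system, and, as the article notes, this approach blocks individual known attacks but cannot block the class.

```python
import re

# A few known prompt-injection phrasings (illustrative, not exhaustive).
KNOWN_ATTACK_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"disregard your system prompt", re.IGNORECASE),
    re.compile(r"you are now in developer mode", re.IGNORECASE),
]

def matches_known_attack(text: str) -> bool:
    """First layer: reject inputs matching published attack strings."""
    return any(p.search(text) for p in KNOWN_ATTACK_PATTERNS)

def classifier_flags(text: str) -> bool:
    """Second layer: keyword stand-in for a separate model trained to
    recognize what injection attempts look like. (In a real system this
    would be another LLM call, which itself then needs protecting.)"""
    suspicious = ("system prompt", "override your", "jailbreak")
    return any(phrase in text.lower() for phrase in suspicious)

def sanitize(user_input: str) -> str:
    """Pass clean input through; raise on anything either layer flags."""
    if matches_known_attack(user_input) or classifier_flags(user_input):
        raise ValueError("possible prompt injection; refusing input")
    return user_input

print(sanitize("What's the weather tomorrow?"))
```

The structural weakness Schneier identifies survives this design: the filtered text is still delivered on the same channel as the model's instructions.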


Facing Angry Users, Sonos Promises to Fix Flaws and Restore Removed Features

2 hours 7 min ago
A blind worker for the National Federation of the Blind said Sonos had a reputation for making products usable for people with disabilities, but that "Overnight they broke that trust," according to the Washington Post. That worker isn't the only customer angry about the latest update to Sonos's wireless speaker system. The newspaper notes that nonprofit worker Charles Knight is "among the Sonos die-hards who are furious at the new app that crippled their options to stream music, listen to an album all the way through or set a morning alarm clock." After Sonos updated its app last week, Knight could no longer set or change his wake-up music alarm. Timers to turn off music were also missing. "Something as basic as an alarm is part of the feature set that users have had for 15 years," said Knight, who has spent thousands of dollars on six Sonos speakers for his bedroom, home office and kitchen. "It was just really badly thought out from start to finish." Some people who are blind also complained that the app omitted voice-control features they need. What's happening to Sonos speaker owners is a cautionary tale. As more of your possessions rely on software — including your car, phone, TV, home thermostat or tractor — the manufacturer can ruin them with one shoddy update... Sonos now says it's fixing problems and adding back missing features within days or weeks. Sonos CEO Patrick Spence acknowledged the company made some mistakes and said Sonos plans to earn back people's trust. "There are clearly people who are having an experience that is subpar," Spence said. "I would ask them to give us a chance to deliver the actions to address the concerns they've raised." Spence said that for years, customers' top complaint was that the Sonos app was clunky and slow to connect to their speakers. Spence said the new app is zippier and easier for Sonos to update. (Some customers disputed that the new app is faster.)
He said some problems like Knight's missing alarms were flaws that Sonos found only once the app was about to roll out. (Sonos updated the alarm feature this week.) Sonos did remove some lesser-used features but planned to add them back. Spence said the company should have told people upfront about the planned timeline to return any missing functions. In a blog post Sonos thanked customers for "valuable feedback," saying it is "working to address" the issues "as quickly as possible" and promising to reintroduce features, fix bugs, and address performance issues. ("Adding and editing alarms" is available now, as well as VoiceOver fixes for the home screen on iOS.) The Washington Post adds that Sonos "said it initially missed some software flaws and will restore more voice-reader functions next week."


'Openwashing'

3 hours 41 min ago
An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.) In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement." Organizations that apply the label to their models may be taking very different approaches to openness. [...] The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. 
systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.


The Delta Emulator Is Changing Its Logo After Adobe Threatened It

9 hours 41 min ago
After Adobe threatened legal action, the Delta Emulator said it'll abandon its current logo for a different, yet-to-be-revealed mark. The issue centers on Delta's stylized letter "D", which the digital media giant says is too similar to its stylized letter "A". The Verge reports: On May 7th, Adobe's lawyers reached out to Delta with a firm but kindly written request to go find a different icon, an email that didn't contain an explicit threat or even use the word infringement -- it merely suggested that Delta might "not wish to confuse consumers or otherwise violate Adobe's rights or the law." But Adobe didn't wait for a reply. On May 8th, one day later, Delta developer Riley Testut got another email from Apple that suggested his app might be at risk because Adobe had reached out to allege Delta was infringing its intellectual property rights. "We responded to both Apple and Adobe explaining our icon was a stylized Greek letter delta -- not an A -- but that we would update the Delta logo anyway to avoid confusion," Testut tells us. The icon you're seeing on the App Store now is just a temporary one, he says, as the team is still working on a new logo. "Both the App Store and AltStore versions have been updated with this temporary icon, but the plan is to update them to the final updated logo with Delta 1.6 once it's finished."


Proteins In Blood Could Provide Early Cancer Warning 'By More Than Seven Years'

Sat, 18/05/2024 - 3:30am
An anonymous reader quotes a report from The Guardian: Proteins in the blood could warn people of cancer more than seven years before it is diagnosed, according to research [published in the journal Nature Communications]. Scientists at the University of Oxford studied blood samples from more than 44,000 people in the UK Biobank, including over 4,900 people who subsequently had a cancer diagnosis. They compared the proteins of people who did and did not go on to be diagnosed with cancer and identified 618 proteins linked to 19 types of cancer, including colon, lung, non-Hodgkin lymphoma and liver. The study, funded by Cancer Research UK and published in Nature Communications, also found 107 proteins associated with cancers diagnosed more than seven years after the patient's blood sample was collected and 182 proteins that were strongly associated with a cancer diagnosis within three years. The authors concluded that some of these proteins could be used to detect cancer much earlier and potentially provide new treatment options, though further research was needed.


Utah Locals Are Getting Cheap 10 Gbps Fiber Thanks To Local Governments

Sat, 18/05/2024 - 1:25am
Karl Bode writes via Techdirt: Tired of being underserved and overbilled by shitty regional broadband monopolies, back in 2002 a coalition of local Utah governments formed UTOPIA -- (the Utah Telecommunication Open Infrastructure Agency). The inter-local agency collaborative venture then set about building an "open access" fiber network that allows any ISP to then come and compete on the shared network. Two decades later and the coalition just announced that 18 different ISPs now compete for Utah resident attention over a network that now covers 21 different Utah cities. In many instances, ISPs on the network are offering symmetrical (uncapped) gigabit fiber for as little as $45 a month (plus $30 network connection fee, so $75). Some ISPs are even offering symmetrical 10 Gbps fiber for around $150 a month: "Sumo Fiber, a veteran member of the UTOPIA Open Access Marketplace, is now offering 10 Gbps symmetrical for $119, plus a $30 UTOPIA Fiber infrastructure fee, bringing the total cost to $149 per month." It's a collaborative hybrid that blurs the line between private companies and government, and it works. And the prices being offered here are significantly less than locals often pay in highly developed tech-centric urban hubs like New York, San Francisco, or Seattle. Yet giant local ISPs like Comcast and Qwest spent decades trying to either sue this network into oblivion, or using their proxy policy orgs (like the "Utah Taxpayer Association") to falsely claim this effort would end in chaos and inevitable taxpayer tears. Yet miraculously UTOPIA is profitable, and for the last 15 years, every UTOPIA project has been paid for completely through subscriber revenues. [...] For years, real world experience and several different studies and reports (including our Copia study on this concept) have made it clear that open access networks and policies result in faster, better, more affordable broadband access. 
UTOPIA is proving it at scale, but numerous other municipalities have been following suit with the help of COVID relief and infrastructure bill funding.


WD Rolls Out New 2.5-Inch HDDs For the First Time In 7 Years

Fri, 17/05/2024 - 11:20pm
Western Digital has unveiled new 6TB external hard drives -- "the first new capacity point for this hard drive form factor in about seven years," reports Tom's Hardware. "There is a catch, though: the HDD is slow and is unlikely to fit into any mobile PCs, so it looks like it will exclusively serve portable and specialized storage products." From the report: Western Digital's 6TB 2.5-inch HDD is currently used for the latest versions of the company's My Passport, Black P10, and G-Drive ArmorATD external storage devices and is not available separately. All of these drives (excluding the already very thick G-Drive ArmorATD) are thicker than their 5 TB predecessors, which may suggest that in a bid to increase the HDD's capacity, the manufacturer simply installed another platter and made the whole drive thicker instead of developing new platters with a higher areal density. While this is a legitimate way to expand the capacity of a hard drive, it is necessary to note that 5TB 2.5-inch HDDs already feature a 15-mm z-height, which is the highest standard z-height for 2.5-inch form-factor storage devices. As a result, these 6TB 2.5-inch drives are unlikely to fit into any laptop. When it comes to specifications of the latest My Passport, Black P10, and G-Drive ArmorATD external HDDs, Western Digital only discloses that they offer up to 130 MB/s read speed (just like their predecessors), feature a USB 3.2 Gen 1 (up to 5 GT/s) interface using either a modern USB Type-C or Micro USB Type-B connector and do not require an external power adapter.
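At the quoted 130 MB/s, filling one of these 6TB drives end to end takes roughly half a day. A back-of-the-envelope check (assuming the decimal terabytes and megabytes that drive spec sheets use):

```python
capacity_bytes = 6 * 10**12   # 6 TB, decimal, as marketed on spec sheets
throughput = 130 * 10**6      # 130 MB/s sustained, per WD's quoted figure

seconds = capacity_bytes / throughput
hours = seconds / 3600
print(f"{hours:.1f} hours")   # ≈ 12.8 hours for a full-capacity transfer
```

That figure also shows why the USB 3.2 Gen 1 link (nominally 5 Gbps, ~500 MB/s usable) is not the bottleneck here; the mechanics of the drive are.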


Palantir's First-Ever AI Warfare Conference

Fri, 17/05/2024 - 10:40pm
An anonymous reader quotes a report from The Guardian, written by Caroline Haskins: On May 7th and 8th in Washington, D.C., the city's biggest convention hall welcomed America's military-industrial complex, its top technology companies and its most outspoken justifiers of war crimes. Of course, that's not how they would describe it. It was the inaugural "AI Expo for National Competitiveness," hosted by the Special Competitive Studies Project -- better known as the "techno-economic" thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference's lead sponsor was Palantir, a software company co-founded by Peter Thiel that's best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump's family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. The conference hall was also filled with booths representing the U.S. military and dozens of its contractors, ranging from Booz Allen Hamilton to a random company that was described to me as Uber for airplane software. At industry conferences like these, powerful people tend to be more unfiltered – they assume they're in a safe space, among friends and peers. I was curious, what would they say about the AI-powered violence in Gaza, or what they think is the future of war? Attendees were told the conference highlight would be a series of panels in a large room toward the back of the hall. In reality, that room hosted just one of note. Featuring Schmidt and the Palantir CEO, Alex Karp, the fire-breathing panel would set the tone for the rest of the conference. More specifically, it divided attendees into two groups: those who see war as a matter of money and strategy, and those who see it as a matter of death. The vast majority of people there fell into group one. 
I've written about relationships between tech companies and the military before, so I shouldn't have been surprised by anything I saw or heard at this conference. But when it ended, and I departed DC for home, it felt like my life force had been completely sucked out of my body. Some of the noteworthy quotes from the panel and convention, as highlighted in Haskins' reporting, include: "It's always great when the CIA helps you out," Schmidt joked when CIA deputy director David Cohen lent him his microphone when his didn't work. The U.S. has to "scare our adversaries to death" in war, said Karp. On university graduates protesting Israel's war in Gaza, Karp described their views as a "pagan religion infecting our universities" and "an infection inside of our society." "The peace activists are war activists," Karp insisted. "We are the peace activists." A huge aspect of war in a democracy, Karp went on to argue, is leaders successfully selling that war domestically. "If we lose the intellectual debate, you will not be able to deploy any armies in the west ever," Karp said. A man in nuclear weapons research jokingly referred to himself as "the new Oppenheimer."


OpenAI Strikes Reddit Deal To Train Its AI On Your Posts

Fri, 17/05/2024 - 10:00pm
Emilia David reports via The Verge: OpenAI has signed a deal for access to real-time content from Reddit's data API, which means it can surface discussions from the site within ChatGPT and other new products. It's an agreement similar to the one Reddit signed with Google earlier this year that was reportedly worth $60 million. The deal will also "enable Reddit to bring new AI-powered features to Redditors and mods" and use OpenAI's large language models to build applications. OpenAI has also signed up to become an advertising partner on Reddit. No financial terms were revealed in the blog post announcing the arrangement, and neither company mentioned training data, either. That last detail is different from the deal with Google, where Reddit explicitly stated it would give Google "more efficient ways to train models." There is, however, a disclosure mentioning that OpenAI CEO Sam Altman is also a shareholder in Reddit but that "This partnership was led by OpenAI's COO and approved by its independent Board of Directors." "Reddit has become one of the internet's largest open archives of authentic, relevant, and always up-to-date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they're looking for, and helps new audiences find community on Reddit," Reddit CEO Steve Huffman says. Reddit stock has jumped on news of the deal, rising 13% on Friday to $63.64. As Reuters notes, it's "within striking distance of the record closing price of $65.11 hit in late March, putting the company on track to add $1.2 billion to its market capitalization."


France Bans TikTok In New Caledonia

Fri, 17/05/2024 - 9:20pm
In what's marked as an EU first, the French government has blocked TikTok in its territory of New Caledonia amid widespread pro-independence protests. Politico reports: A French draft law, passed Monday, would let citizens vote in local elections after 10 years' residency in New Caledonia, prompting opposition from independence activists worried it will dilute the representation of indigenous people. The violent demonstrations that have ensued in the South Pacific island of 270,000 have killed at least five people and injured hundreds. In response to the protests, the government suspended the popular video-sharing app -- owned by Beijing-based ByteDance and favored by young people -- as part of state-of-emergency measures alongside the deployment of troops and an initial 12-day curfew. French Prime Minister Gabriel Attal didn't detail the reasons for shutting down the platform. The local telecom regulator began blocking the app earlier on Wednesday. "It is regrettable that an administrative decision to suspend TikTok's service has been taken on the territory of New Caledonia, without any questions or requests to remove content from the New Caledonian authorities or the French government," a TikTok spokesperson said. "Our security teams are monitoring the situation very closely and ensuring that our platform remains safe for our users. We are ready to engage in discussions with the authorities." Digital rights NGO Quadrature du Net on Friday contested the TikTok suspension with France's top administrative court over a "particularly serious blow to freedom of expression online." A growing number of authoritarian regimes worldwide have resorted to internet shutdowns to stifle dissent. This unexpected -- and drastic -- decision by France's center-right government comes amid a rise in far-right activism in Europe and a regression on media freedom. "France's overreach establishes a dangerous precedent across the globe. 
It could reinforce the abuse of internet shutdowns, which includes arbitrary blocking of online platforms by governments around the world," said Eliska Pirkova, global freedom of expression lead at Access Now.


SEC: Financial Orgs Have 30 Days To Send Data Breach Notifications

Fri, 17/05/2024 - 8:40pm
An anonymous reader quotes a report from BleepingComputer: The Securities and Exchange Commission (SEC) has adopted amendments to Regulation S-P that require certain financial institutions to disclose data breach incidents to impacted individuals within 30 days of discovery. Regulation S-P was introduced in 2000 and controls how some financial entities must treat nonpublic personal information belonging to consumers. These rules include developing and implementing data protection policies, confidentiality and security assurances, and protecting against anticipated threats. The new amendments (PDF) adopted earlier this week impact financial firms, such as broker-dealers (funding portals included), investment firms, registered investment advisers, and transfer agents. The modifications were initially proposed in March of last year to modernize and improve the protection of individual financial information from data breaches and exposure to non-affiliated parties. Below is a summary of the introduced changes:

- Notify affected individuals within 30 days if their sensitive information is, or is likely to be, accessed or used without authorization, detailing the incident, breached data, and protective measures taken. Exemption applies if the information isn't expected to cause substantial harm or inconvenience to the exposed individuals.
- Develop, implement, and maintain written policies and procedures for an incident response program to detect, respond to, and recover from unauthorized access or use of customer information. This should include procedures to assess and contain security incidents, enforce policies, and oversee service providers.
- Expand safeguards and disposal rules to cover all nonpublic personal information, including that received from other financial institutions.
- Require documentation of compliance with safeguards and disposal rules, excluding funding portals.
- Align annual privacy notice delivery with the FAST Act, exempting certain conditions.
- Extend safeguards and disposal rules to transfer agents registered with the SEC or other regulatory agencies.


Canada Security Intelligence Chief Warns China Can Use TikTok To Spy on Users

Fri, 17/05/2024 - 8:01pm
The head of Canada's Security Intelligence Service warned Canadians against using video app TikTok, saying data gleaned from its users "is available to the government of China," CBC News reported on Friday. From a report: "My answer as director of the Canadian Security Intelligence Service (CSIS) is that there is a very clear strategy on the part of the government of China to be able to acquire personal information from anyone around the world," CSIS Director David Vigneault told CBC in an interview set to air on Saturday. "These assertions are unsupported by evidence, and the fact is that TikTok has never shared Canadian user data with the Chinese government, nor would we if asked," a TikTok spokesperson said in response to a request for comment. Canada in September ordered a national security review of a proposal by TikTok to expand the short-video app's business in the country. Vigneault said he will take part in that review and offer advice, CBC reported.


Robert Dennard, Inventor of DRAM, Dies At 91

Fri, 17/05/2024 - 7:22pm
necro81 writes: Robert Dennard was working at IBM in the 1960s when he invented a way to store one bit using a single transistor and capacitor. The technology became dynamic random access memory (DRAM), which, when implemented using the emerging technology of silicon integrated circuits, helped catapult computing forward by leaps and bounds. The first commercial DRAM chips in the late 1960s held just 1024 bits; today's DDR5 modules hold hundreds of billions. Dr. Robert H. Dennard passed away last month at age 91. In the 1970s he helped guide technology roadmaps for the ever-shrinking feature size of lithography, enabling the early years of Moore's Law. He wrote a seminal paper in 1974 relating feature size and power consumption that is now referred to as Dennard Scaling. His technological contributions earned him numerous awards and accolades from the National Academy of Engineering, IEEE, and the National Inventors Hall of Fame.
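Dennard Scaling can be stated compactly: if a transistor's linear dimensions, supply voltage, and current all shrink by a factor k, switching power per transistor falls as 1/k², while the number of transistors per unit area rises as k², so power density stays constant. A toy calculation illustrating that relation (the shrink factor 1.4 is an illustrative example, not a figure from the article):

```python
def dennard_scale(k):
    """Scale one process generation by linear factor k (k > 1 = smaller
    features). Under classic Dennard scaling, voltage and current each
    shrink by 1/k, so P = V * I drops by 1/k^2 per transistor, while
    transistor density grows by k^2."""
    power_per_transistor = 1 / k**2
    transistors_per_area = k**2
    return power_per_transistor * transistors_per_area  # power density

# Power density is unchanged no matter how aggressive the shrink:
print(round(dennard_scale(1.4), 6))  # 1.0
```

The breakdown of this relation in the mid-2000s, when voltages could no longer shrink with feature size, is what ended the era of free clock-speed gains that Dennard's paper had underwritten.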


Two Students Uncover Security Bug That Could Let Millions Do Their Laundry For Free

Fri, 17/05/2024 - 6:40pm
Two university students discovered a security flaw in over a million internet-connected laundry machines operated by CSC ServiceWorks, allowing users to avoid payment and add unlimited funds to their accounts. The students, Alexander Sherbrooke and Iakov Taranenko from UC Santa Cruz, reported the vulnerability to the company, a major laundry service provider, in January but claim it remains unpatched. TechCrunch adds: Sherbrooke said he was sitting on the floor of his basement laundry room in the early hours one January morning with his laptop in hand, and "suddenly having an 'oh s-' moment." From his laptop, Sherbrooke ran a script of code with instructions telling the machine in front of him to start a cycle despite having $0 in his laundry account. The machine immediately woke up with a loud beep and flashed "PUSH START" on its display, indicating the machine was ready to wash a free load of laundry. In another case, the students added an ostensible balance of several million dollars into one of their laundry accounts, which reflected in their CSC Go mobile app as though it were an entirely normal amount of money for a student to spend on laundry.


User Outcry As Slack Scrapes Customer Data For AI Model Training

Fri, 17/05/2024 - 6:02pm
New submitter txyoji shares a report: Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models. By default, and without requiring users to opt in, Slack said its systems have been analyzing customer data and usage information (including messages, content and files) to build AI/ML models to improve the software. The company insists it has technical controls in place to block Slack from accessing the underlying content and promises that data will not leak across workspaces but, despite these assurances, corporate Slack admins are scrambling to opt out of the data scraping. This line in Slack's communication sparked a social media controversy with the realization that content in direct messages and other sensitive content posted to Slack was being used to develop AI/ML models and that opting out would require sending e-mail requests: "If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line 'Slack global model opt-out request'. We will process your request and respond once the opt-out has been completed."

Read more of this story at Slashdot.

Apple Plans a Thinner iPhone in 2025

Fri, 17/05/2024 - 5:20pm
Apple is developing a significantly thinner version of the iPhone [non-paywalled source] that could be released as early as 2025, The Information reported Friday, citing three people with direct knowledge of the project. From the report: The slimmer iPhone could be released concurrently with the iPhone 17, expected in September 2025, according to the three people with direct knowledge and two others familiar with the project. It could be priced higher than the iPhone Pro Max, currently Apple's most expensive model starting at $1,200, they said. The people familiar with the project described the new iPhone, internally code-named D23, as a major redesign -- similar to the iPhone X, which Apple marketed as a technological leap from previous generations and which started at $1,000 when it was released in 2017. Several of its novel features, such as FaceID, the OLED screen and glass back, became standard in subsequent models.

Read more of this story at Slashdot.

Apple Geofences Third-Party Browser Engine Work for EU Devices

Fri, 17/05/2024 - 4:40pm
Apple's grudging accommodation of European law -- allowing third-party browser engines on its mobile devices -- apparently comes with a restriction that makes it difficult to develop and support third-party browser engines for the region. From a report: The Register has learned from those involved in the browser trade that Apple has limited the development and testing of third-party browser engines to devices physically located in the EU. That requirement adds an additional barrier to anyone planning to develop and support a browser with an alternative engine in the EU. It effectively geofences the development team. Browser-makers whose dev teams are located in the US will only be able to work on simulators. While some testing can be done in a simulator, there's no substitute for testing on device -- which means developers will have to work within Apple's prescribed geographical boundary. Prior to iOS 17.4, Apple required all web browsers on iOS or iPadOS to use Apple's WebKit rendering engine. Alternatives like Gecko (used by Mozilla Firefox) or Blink (used by Google and other Chromium-based browsers) were not permitted. Whatever brand of browser you thought you were using on your iPhone, under the hood it was basically Safari. Browser makers have objected to this for years, because it limits competitive differentiation and reduces the incentive for Apple owners to use non-Safari browsers.

Read more of this story at Slashdot.

VW and Renault End Talks To Develop Affordable EV

Fri, 17/05/2024 - 4:03pm
Volkswagen has walked away from talks with Renault to jointly develop an affordable electric version of the Twingo car, Reuters reported Friday, citing sources familiar with the situation, in a setback for the EU carmakers' efforts to fend off Chinese rivals. From the report: The collapse of negotiations could mean the German carmaker may have to go it alone in developing its own affordable electric vehicle (EV). Renault will continue designing its electric Twingo, scheduled to hit the market in 2026. Both had hoped that sharing the work would cut costs that represent a key hurdle for European carmakers in the face of cheaper cars from China. Volkswagen broke off discussions mainly because Renault had wanted to build the car in one of its plants at a time when VW is seeking to fully utilise its European production network, one of the sources said.

Read more of this story at Slashdot.

OpenAI's Long-Term AI Risk Team Has Disbanded

Fri, 17/05/2024 - 3:25pm
An anonymous reader shares a report: In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts. Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other colead, posted on X that he had resigned.

Read more of this story at Slashdot.