Open Source

'Landrun': Lightweight Linux Sandboxing With Landlock, No Root Required (github.com) 5

Over on Reddit's "selfhosted" subreddit for alternatives to popular services, long-time Slashdot reader Zoup described a pain point:

- Landlock is a Linux Security Module (LSM) that lets unprivileged processes restrict themselves.

- It's been in the kernel since 5.13, but the API is awkward to use directly.

- It always annoyed the hell out of me to run random binaries from the internet without any real control over what they can access.


So they've rolled their own solution, according to Thursday's submission to Slashdot: I just released Landrun, a Go-based CLI tool that wraps Linux Landlock (kernel 5.13+) to sandbox any process without root, containers, or seccomp. Think firejail, but minimal and kernel-native. Supports fine-grained file access (ro/rw/exec) and TCP port restrictions (kernel 6.7+). No daemons, no YAML, just flags.

Example (where --rox allows read-only access with execution to specified path):

# landrun --rox /usr touch /tmp/file
touch: cannot touch '/tmp/file': Permission denied
# landrun --rox /usr --rw /tmp touch /tmp/file
#

It's MIT-licensed, easy to audit, and now supports systemd services.
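The "awkward" raw Landlock API that landrun wraps can be sketched directly. The sketch below is illustrative, not landrun's actual code: the constants and syscall numbers come from the x86-64 `uapi/linux/landlock.h` headers (Landlock ABI v1), and the `restrict_paths` helper is a hypothetical name. Applying a ruleset requires Linux 5.13 or later.

```python
import ctypes
import os

# Access-right bits from uapi/linux/landlock.h (Landlock ABI v1).
LANDLOCK_ACCESS_FS_EXECUTE    = 1 << 0
LANDLOCK_ACCESS_FS_WRITE_FILE = 1 << 1
LANDLOCK_ACCESS_FS_READ_FILE  = 1 << 2
LANDLOCK_ACCESS_FS_READ_DIR   = 1 << 3

# x86-64 syscall numbers, available since kernel 5.13.
SYS_landlock_create_ruleset = 444
SYS_landlock_add_rule       = 445
SYS_landlock_restrict_self  = 446
LANDLOCK_RULE_PATH_BENEATH  = 1
PR_SET_NO_NEW_PRIVS         = 38

def rox_mask():
    """Rights a flag like --rox would grant: read + execute."""
    return (LANDLOCK_ACCESS_FS_EXECUTE |
            LANDLOCK_ACCESS_FS_READ_FILE |
            LANDLOCK_ACCESS_FS_READ_DIR)

def rw_mask():
    """Rights a flag like --rw would grant: read + write, no execute."""
    return (LANDLOCK_ACCESS_FS_WRITE_FILE |
            LANDLOCK_ACCESS_FS_READ_FILE |
            LANDLOCK_ACCESS_FS_READ_DIR)

class RulesetAttr(ctypes.Structure):
    _fields_ = [("handled_access_fs", ctypes.c_uint64)]

class PathBeneathAttr(ctypes.Structure):
    _pack_ = 1  # the kernel struct is packed: u64 + s32
    _fields_ = [("allowed_access", ctypes.c_uint64),
                ("parent_fd", ctypes.c_int32)]

def restrict_paths(rules):
    """Apply Landlock rules, e.g. restrict_paths([("/usr", rox_mask())]).
    Every path not covered by a rule becomes inaccessible afterward."""
    libc = ctypes.CDLL(None, use_errno=True)
    attr = RulesetAttr(rox_mask() | rw_mask())
    ruleset_fd = libc.syscall(SYS_landlock_create_ruleset,
                              ctypes.byref(attr), ctypes.sizeof(attr), 0)
    for path, mask in rules:
        pb = PathBeneathAttr(mask, os.open(path, os.O_PATH))
        libc.syscall(SYS_landlock_add_rule, ruleset_fd,
                     LANDLOCK_RULE_PATH_BENEATH, ctypes.byref(pb), 0)
    libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)  # required before restricting
    libc.syscall(SYS_landlock_restrict_self, ruleset_fd, 0)
```

Because the restriction applies to the calling process and is inherited by its children, a wrapper like landrun can build the ruleset, restrict itself, and then exec the target binary.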

Books

Ian Fleming Published the James Bond Novel 'Moonraker' 70 Years Ago Today (cbr.com) 33

"The third James Bond novel was published on this day in 1955," writes long-time Slashdot reader sandbagger. Film buff Christian Petrozza shares some history: In 1979, the market was hot amid the studios to make the next big space opera. Star Wars blew up the box office in 1977 with Alien soon following and while audiences eagerly awaited the next installment of George Lucas' The Empire Strikes Back, Hollywood was buzzing with spacesuits, lasers, and ships that cruised the stars. Politically, the Cold War between the United States and Russia was still a hot topic, with the James Bond franchise fanning the flames in the media entertainment sector. Moon missions had just finished their run in the early 70s and the space race was still generationally fresh. With all this in mind, as well as the successful run of Roger Moore's fun and campy Bond, the time seemed ripe to boldly take the globe-trotting Bond where no spy has gone before.

Thus, 1979's Moonraker blasted off to theatres full of chrome space-suits, laser guns, and jetpacks, as the franchise went full-bore science fiction to keep up with the Joneses of Hollywood's then-hottest genre. The film was a commercial smash hit, grossing $210 million worldwide. Despite some mixed reviews from critics, audiences seemed jazzed about seeing James Bond in space.

When it comes to adaptations of the novel that Ian Fleming wrote of the same name, Moonraker couldn't be farther from its source material, and may as well be renamed completely to avoid any association... Ian Fleming's original Moonraker was more of a post-war commentary on the domestic fears of modern weapons being turned on Europe by enemies who were hired for science by newer foes. With Nazi scientists being hired by both the U.S. and Russia to build weapons of mass destruction after World War II, this was less of a sci-fi story and much more of a cautionary tale.

They argue that a new film version of Moonraker could "find a happy medium between the glamor and the grit of the James Bond franchise..."
ISS

NASA Seeks Proposals for Two More Private Astronaut Space Station Visits (spacenews.com) 3

This week NASA "issued a solicitation for the next two private astronaut missions to the International Space Station," reports Space News. Scheduled after May of 2026 and then mid-2027, "These will be the fifth and sixth such missions to the ISS, part of a broader low Earth orbit commercialization effort by NASA with the ultimate goal of replacing the International Space Station with one or more commercial stations."

NASA's Space Station program manager calls the missions "a key part" of helping industry partners "gain the experience needed to train and manage crews, conduct research, and develop future destinations." In short, they see the missions "providing companies with hands-on opportunities to refine their capabilities and build partnerships that will shape the future of low Earth orbit." [NASA's call for proposals] offers an opportunity to have future missions commanded by someone other than a former NASA astronaut. While companies must propose a commander who meets current requirements, they can also propose an alternate commander who is a former astronaut from the Canadian Space Agency, European Space Agency or Japan Aerospace Exploration Agency with similar ISS experience requirements... ["Broadening of this requirement is not guaranteed," NASA warns.]

That could allow some former astronauts already working with commercial spaceflight companies an opportunity to command private astronaut missions. Axiom Space, for example, announced in July 2024 that former ESA astronaut Tim Peake had joined its astronaut team. That came after Axiom and the U.K. Space Agency signed a memorandum of understanding in October 2023 to study the feasibility of a private astronaut mission crewed exclusively by U.K. astronauts.

So far Axiom Space has been awarded all four private astronaut missions, according to the article, "flying one mission each in 2022, 2023 and 2024. Its next mission, Ax-4, is scheduled for no earlier than May."

But "While Axiom has little or no competition for previous PAM awards, it will likely face stiffer competition this time. Vast, a company also planning to develop commercial space stations, has previously stated its intent to submit proposals..."
AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com) 26

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that while performing their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content." Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings...
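The integer-overflow class described here is easy to illustrate. The snippet below is a generic, hypothetical reconstruction (not actual GRUB2 code): a filesystem parser computes an allocation size from attacker-controlled header fields in 32-bit arithmetic, the multiplication wraps, and the resulting buffer is far smaller than the data later copied into it. Python integers don't overflow, so the wrap is simulated with a mask.

```python
MASK32 = 0xFFFFFFFF  # simulate unsigned 32-bit C arithmetic

def alloc_size_buggy(count: int, entry_size: int) -> int:
    # A C parser computing `count * entry_size` in a u32 silently wraps,
    # so a huge entry table can yield a tiny (or zero-byte) allocation.
    return (count * entry_size) & MASK32

def alloc_size_checked(count: int, entry_size: int) -> int:
    # The usual fix: reject the computation if it cannot fit in 32 bits.
    total = count * entry_size
    if total > MASK32:
        raise OverflowError("entry table too large")
    return total

# Attacker-controlled header: 0x10000000 entries of 16 bytes each.
needed  = 0x10000000 * 16                   # 4 GiB actually required
wrapped = alloc_size_buggy(0x10000000, 16)  # wraps around to 0
```

A zero-byte allocation followed by a copy of `needed` bytes is exactly the heap-overflow primitive that, in a bootloader, can be leveraged to bypass Secure Boot.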

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 7

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
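Signing a model "represented as a directory tree" reduces to producing one stable digest over every file, which is what then gets signed via Sigstore. The library's real API and manifest format differ; this stdlib-only sketch (the manifest layout is invented for illustration) shows the core idea of binding each file's digest to its path so files can't be added, removed, or swapped undetected:

```python
import hashlib
from pathlib import Path

def digest_model_dir(root: str) -> str:
    """Hash every file under `root` in a deterministic (sorted) order
    and fold the per-file digests into a single manifest digest."""
    manifest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            file_hash = hashlib.sha256(path.read_bytes()).hexdigest()
            # Bind each digest to its relative path inside the model tree.
            entry = f"{path.relative_to(root)}:{file_hash}\n"
            manifest.update(entry.encode())
    return manifest.hexdigest()
```

Verification is the mirror image: recompute the digest at model-load time and check it against the value in the (Sigstore-verified) signature.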

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Python

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp) 1

Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.
They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout. [Since launching 20 years ago, PyPI's terms of service have only been updated twice.])

In other news, the security developer-in-residence at the Python Software Foundation has been continuing work on a Software Bill-of-Materials (SBOM) as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata ('.dist-info/sboms') as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common for scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing the PEP ahead of its final acceptance into Python packaging standards. Seth has begun implementing code that tools can use when adopting the PEP, such as a project which abstracts different Linux system package managers' functionality to resolve a file path back to the metadata of the package that provides it.
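Under the proposal, a scanner would discover SBOM documents by looking for that designated directory in an installed distribution's file list (e.g. as returned by `importlib.metadata.files()`). A minimal sketch of that lookup, assuming only the directory convention from the PEP (real tools would then parse each document as CycloneDX or SPDX):

```python
from pathlib import PurePosixPath

def find_sbom_documents(dist_files):
    """Return the paths stored in the `.dist-info/sboms` directory
    that PEP 770 designates for SBOM documents."""
    sboms = []
    for f in dist_files:
        p = PurePosixPath(str(f))
        if (len(p.parts) >= 3
                and p.parts[0].endswith(".dist-info")
                and p.parts[1] == "sboms"):
            sboms.append(str(p))
    return sboms
```

For example, a wheel bundling a Rust extension might ship `foo-1.0.dist-info/sboms/cargo.cdx.json`, which this lookup would surface for a vulnerability scanner.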

Security developer-in-residence Seth Larson will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?"

Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
Networking

Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht (x.com) 10

Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of TekLibre."

But on X.com Eric S. Raymond called him "one of the unsung heroes of the Internet, and a close friend of mine who I will miss very badly." Dave, known on X as @mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using X. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters.
Also on X, legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".)
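The latency problem Täht spent years attacking is simple to quantify: a packet arriving at a full buffer waits buffer size divided by link rate before it leaves, so an oversized, unmanaged buffer adds that much delay to every interactive packet behind it. A back-of-the-envelope sketch (the buffer and link figures are illustrative, not drawn from his work):

```python
def queue_delay_ms(buffer_bytes: int, link_bits_per_sec: float) -> float:
    """Worst-case time a packet waits behind a full FIFO buffer."""
    return buffer_bytes * 8 / link_bits_per_sec * 1000

# A bloated 1 MiB buffer ahead of a 10 Mbit/s uplink adds ~839 ms of
# latency -- enough to wreck a video call or a game. A small, actively
# managed queue (the goal of the fq_codel work Taht championed) keeps
# the same link in the tens of milliseconds.
bloated = queue_delay_ms(1024 * 1024, 10e6)   # ~839 ms
sane    = queue_delay_ms(32 * 1024, 10e6)     # ~26 ms
```

That three-orders-of-magnitude gap between throughput-optimal and latency-optimal buffering is the essence of bufferbloat.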

Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..."

Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. While I knew him he went blind in one eye and was diagnosed with multiple sclerosis." He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking...

In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. Ave atque vale, Dave.

Weeks before his death Täht was still active on X.com, retweeting LWN's article about "The AI scraperbot scourge", an announcement from Texas Instruments, and even a Slashdot headline.

Täht was also Slashdot reader #603,670, submitting stories about network latency, leaving comments about AI, and making announcements about the Bufferbloat project.
AI

OpenAI's Motion to Dismiss Copyright Claims Rejected by Judge (arstechnica.com) 54

Is OpenAI's ChatGPT violating copyrights? The New York Times sued OpenAI in December 2023. But Ars Technica summarizes OpenAI's response: The New York Times (or NYT) "should have known that ChatGPT was being trained on its articles... partly because of the newspaper's own reporting..."

OpenAI pointed to a single November 2020 article, where the NYT reported that OpenAI was analyzing a trillion words on the Internet.

But on Friday, U.S. district judge Sidney Stein disagreed, denying OpenAI's motion to dismiss the NYT's copyright claims partly based on one NYT journalist's reporting. In his opinion, Stein confirmed that it's OpenAI's burden to prove that the NYT knew that ChatGPT would potentially violate its copyrights two years prior to its release in November 2022... And OpenAI's other argument — that it was "common knowledge" that ChatGPT was trained on NYT articles in 2020 based on other reporting — also failed for similar reasons...

OpenAI may still be able to prove through discovery that the NYT knew that ChatGPT would have infringing outputs in 2020, Stein said. But at this early stage, dismissal is not appropriate, the judge concluded. The same logic follows in a related case from The Daily News, Stein ruled. Davida Brook, co-lead counsel for the NYT, suggested in a statement to Ars that the NYT counts Friday's ruling as a win. "We appreciate Judge Stein's careful consideration of these issues," Brook said. "As the opinion indicates, all of our copyright claims will continue against Microsoft and OpenAI for their widespread theft of millions of The Times's works, and we look forward to continuing to pursue them."

The New York Times is also arguing that OpenAI contributes to ChatGPT users' infringement of its articles, and OpenAI lost its bid to dismiss that claim, too. The NYT argued that by training AI models on NYT works and training ChatGPT to deliver certain outputs, without the NYT's consent, OpenAI should be liable for users who manipulate ChatGPT to regurgitate content in order to skirt the NYT's paywalls... At this stage, Stein said that the NYT has "plausibly" alleged contributory infringement, showing through more than 100 pages of examples of ChatGPT outputs and media reports showing that ChatGPT could regurgitate portions of paywalled news articles that OpenAI "possessed constructive, if not actual, knowledge of end-user infringement." Perhaps more troubling to OpenAI, the judge noted that "The Times even informed defendants 'that their tools infringed its copyrighted works,' supporting the inference that defendants possessed actual knowledge of infringement by end users."

Earth

A Busy Hurricane Season is Expected. Here's How It Will Be Different From the Last (washingtonpost.com) 38

An anonymous reader shares a report: Yet another busy hurricane season is likely across the Atlantic this year -- but some of the conditions that supercharged storms like Hurricanes Helene and Milton in 2024 have waned, according to a key forecast issued Thursday.

A warm -- yet no longer record-hot -- strip of waters across the Atlantic Ocean is forecast to help fuel development of 17 named tropical cyclones during the season that runs from June 1 through Nov. 30, according to Colorado State University researchers. Of those tropical cyclones, nine are forecast to become hurricanes, with four of those expected to reach "major" hurricane strength.

That would mean a few more tropical storms and hurricanes than in an average year, yet slightly quieter conditions than those observed across the Atlantic basin last year. This time last year, researchers from CSU were warning of an "extremely active" hurricane season with nearly two dozen named tropical storms. The next month, the National Oceanic and Atmospheric Administration released an aggressive forecast, warning the United States could face one of its worst hurricane seasons in two decades.

The forecast out Thursday underscores how warming oceans and cyclical patterns in storm activity have primed the Atlantic basin for what is now a decades-long string of frequent, above-normal -- but not necessarily hyperactive -- seasons, said Philip Klotzbach, a senior research scientist at Colorado State and the forecast's lead author.

Science

Bonobos May Combine Words In Ways Previously Thought Unique To Humans (theguardian.com) 17

A new study shows bonobos can combine vocal calls in ways that mirror human language, producing phrases with meanings beyond the sum of individual sounds. "Human language is not as unique as we thought," said Dr Melissa Berthet, the first author of the research from the University of Zurich. Another author, Dr Simon Townsend, said: "The cognitive building blocks that facilitate this capacity is at least 7m years old. And I think that is a really cool finding." The Guardian reports: Writing in the journal Science, Berthet and colleagues said that in human language, words were often combined to produce phrases that either had a meaning that was simply the sum of its parts, or a meaning that was related to, but differed from, those of the constituent words. "'Blond dancer' -- it's a person that is both blond and a dancer, you just have to add the meanings. But a 'bad dancer' is not a person that is bad and a dancer," said Berthet. "So bad is really modifying the meaning of dancer here." It was previously thought animals such as birds and chimpanzees were only able to produce the former type of combination, but scientists have found bonobos can create both.

The team recorded 700 vocalizations from 30 adult bonobos in the Democratic Republic of the Congo, checking the context of each against a list of 300 possible situations or descriptions. The results reveal bonobos have seven different types of call, used in 19 different combinations. Of these, 15 require further analysis, but four appear to follow the rules of human sentences. Yelps -- thought to mean "let's do that" -- followed by grunts -- thought to mean "look at what I am doing," were combined to make "yelp-grunt," which appeared to mean "let's do what I'm doing." The combination, the team said, reflected the sum of its parts and was used by bonobos to encourage others to build their night nests.

The other three combinations had a meaning apparently related to, but different from, their constituent calls. For example, the team found a peep -- which roughly means "I would like to ..." -- followed by a whistle -- appeared to mean "let's stay together" -- could be combined to create "peep-whistle." This combination was used to smooth over tense social situations, such as during mating or displays of prowess. The team speculated its meaning was akin to "let's find peace." The team said the findings in bonobos, together with the previous work in chimps, had implications for the evolution of language in humans, given all three species showed the ability to combine words or vocalizations to create phrases.

Space

Fram2 Crew Returns To Earth After Polar Orbit Mission (cnn.com) 21

SpaceX's Fram2 mission returned safely after becoming the first crewed spaceflight to orbit directly over Earth's poles. From a report: Led by cryptocurrency billionaire Chun Wang, who is the financier of this mission, the Fram2 crew has been free-flying through orbit since Monday. The group splashed down at 9:19 a.m. PT, or 12:19 p.m. ET, off the coast of California -- the first West Coast landing in SpaceX's five-year history of human spaceflight missions. The company livestreamed the splashdown and recovery of the capsule on its website.

During the journey, the Fram2 crew members were slated to carry out various research projects, including capturing images of auroras from space and documenting their experiences with motion sickness. [...] This trip is privately funded, and such missions allow for SpaceX's customers to spend their time in space as they see fit. For Fram2, the crew traveled to orbit prepared to carry out 22 research and science experiments, some of which were designed and overseen by SpaceX. Most of the research involves evaluating crew health.

Science

Scientists Warn Indonesia's Rice Megaproject Faces Failure (science.org) 30

Indonesian President Prabowo Subianto's ambitious plan to create 1 million hectares of new rice farms in eastern Merauke Regency faces strong criticism from scientists who have warned it will fail due to unsuitable soils and climate. Military "food brigades" are currently guarding bulldozers clearing swampy forests in Indonesian New Guinea for the project, which aims to boost food self-sufficiency for the nation's 281 million people.

Soil scientists warn that Merauke's conditions could lead to acidic soils unable to support economically viable rice farming, potentially resulting in abandoned fields vulnerable to wildfires. "Farmers will get no profit at all," said Dwi Andreas, a soil scientist at Bogor Agricultural University who tested 12 rice varieties in similar soils with poor results.

The initiative mirrors past failed megaprojects, including a 1990s attempt to convert 1 million hectares of Borneo peatlands to rice paddies and a 2020 onion and potato farming expansion in North Sumatra that saw 90% of fields abandoned. A previous 2010 attempt to expand rice farming in Merauke also failed, destroying forests that Indigenous Papuans relied on and increasing childhood malnutrition, according to anthropologist Laksmi Adriani.
AI

Two Teenagers Built 'Cal AI', a Photo Calorie App With Over a Million Users (techcrunch.com) 21

An anonymous reader quotes a report from TechCrunch: In a world filled with "vibe coding," Zach Yadegari, teen founder of Cal AI, stands in ironic, old-fashioned contrast. Ironic because Yadegari and his co-founder, Henry Langmack, are both just 18 years old and still in high school. Yet their story, so far, is a classic. Launched in May, Cal AI has generated over 5 million downloads in eight months, Yadegari says. Better still, he tells TechCrunch that the customer retention rate is over 30% and that the app generated over $2 million in revenue last month. [...]

The concept is simple: Take a picture of the food you are about to consume, and let the app log calories and macros for you. It's not a unique idea. For instance, the big dog in calorie counting, MyFitnessPal, has its Meal Scan feature. Then there are apps like SnapCalorie, which was released in 2023 and created by the founder of Google Lens. Cal AI's advantage, perhaps, is that it was built wholly in the age of large image models. It uses models from Anthropic and OpenAI and RAG to improve accuracy and is trained on open source food calorie and image databases from sites like GitHub.

"We have found that different models are better with different foods," Yadegari tells TechCrunch. Along the way, the founders coded through technical problems like recognizing ingredients from food packages or in jumbled bowls. The result is an app that the creators say is 90% accurate, which appears to be good enough for many dieters.
The report says Yadegari began mastering Python and C# in middle school and went on to build his first business in ninth grade -- a website called Totally Science that gave students access to unblocked games (cleverly named to evade school filters). He sold the company at age 16 to FreezeNova for $100,000.

Following the sale, Yadegari immersed himself in the startup scene, watching Y Combinator videos and networking on X, where he met co-founder Blake Anderson, known for creating ChatGPT-powered apps like RizzGPT. Together, they launched Cal AI and moved to a hacker house in San Francisco to develop their prototype.
Wikipedia

Wikimedia Drowning in AI Bot Traffic as Crawlers Consume 65% of Resources 63

Web crawlers collecting training data for AI models are overwhelming Wikipedia's infrastructure, with bot traffic growing exponentially since early 2024, according to the Wikimedia Foundation. Data released April 1 shows bandwidth for multimedia content has surged 50% since January, primarily from automated programs scraping Wikimedia Commons' 144 million openly licensed media files.

This unprecedented traffic is causing operational challenges for the non-profit. When Jimmy Carter died in December 2024, his Wikipedia page received 2.8 million views in a day, while a 1.5-hour video of his 1980 presidential debate caused network traffic to double, resulting in slow page loads for some users.

Analysis shows 65% of the foundation's most resource-intensive traffic comes from bots, despite bots accounting for only 35% of total pageviews. The foundation's Site Reliability team now routinely blocks overwhelming crawler traffic to prevent service disruptions. "Our content is free, our infrastructure is not," the foundation said, announcing plans to establish sustainable boundaries for automated content consumption.
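One boundary the foundation can publish is the long-standing robots.txt convention, which well-behaved crawlers already honor before fetching a page. Python's standard library shows the check a scraper is supposed to perform; the policy lines and bot name below are made up for illustration, not Wikimedia's actual rules:

```python
from urllib.robotparser import RobotFileParser

# Parse an example policy in-memory (a real crawler would fetch
# https://<site>/robots.txt). These rules are hypothetical.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /api/",
    "Crawl-delay: 5",
])

allowed = rp.can_fetch("ExampleTrainingBot", "https://example.org/wiki/Linux")
blocked = rp.can_fetch("ExampleTrainingBot", "https://example.org/api/dump")
delay   = rp.crawl_delay("ExampleTrainingBot")  # pacing hint, if present
```

The operational problem described in the article is precisely that many AI scrapers skip this check, which is why the Site Reliability team falls back to blocking them at the network level.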
Linux

An Interactive-Speed Linux Computer Made of Only 3 8-Pin Chips (dmitry.gr) 32

Software engineer and longtime Slashdot reader Dmitry Grinberg (dmitrygr) shares a recent project they've been working on: "an interactive-speed Linux on a tiny board you can easily build with only 3 8-pin chips": There was a time when one could order a kit and assemble a computer at home. It would do just about what a contemporary store-bought computer could do. That time is long gone. Modern computers are made of hundreds of huge complex chips with no public datasheets and many hundreds of watts of power supplied to them over complex power delivery topologies. It does not help that modern operating systems require gigabytes of RAM, terabytes of storage, and always-on internet connectivity to properly spy on you. But what if one tried to fit a modern computer into a kit that could be easily assembled at home? What if the kit only had three chips, each with only 8 pins? Can it be done? Yes. The system runs a custom MIPS emulator written in ARMv6 assembly and includes a custom bootloader that supports firmware updates via FAT16-formatted SD cards. Clever pin-sharing hacks allow all components (RAM, SD, serial I/O) to work despite there being only 6 usable I/O pins. Overclocked to up to 150MHz, the board boots into a full Linux shell in about a minute and performs at ~1.65MHz MIPS-equivalent speed.

It's not fast, writes Dmitry, but it's fully functional -- you can edit files, compile code, and even install Debian packages. A kit may be made available if a partner is found.
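The heart of such a system is a fetch-decode-execute loop over MIPS machine words. This toy Python interpreter handles just three MIPS I instructions (addiu, addu, sll) to show the shape of the loop; it is a conceptual sketch, nothing like Dmitry's ARMv6-assembly emulator:

```python
MASK32 = 0xFFFFFFFF

def run(program, steps):
    """Interpret a tiny MIPS subset. `program` is a list of 32-bit words."""
    regs = [0] * 32          # MIPS register file; $0 is hardwired to zero
    pc = 0
    for _ in range(steps):
        insn = program[pc // 4]              # fetch
        op = insn >> 26                      # decode
        rs, rt = (insn >> 21) & 31, (insn >> 16) & 31
        if op == 0x09:                       # addiu rt, rs, imm16
            imm = insn & 0xFFFF
            if imm & 0x8000:                 # sign-extend the immediate
                imm -= 0x10000
            regs[rt] = (regs[rs] + imm) & MASK32
        elif op == 0:                        # R-type
            rd = (insn >> 11) & 31
            funct = insn & 63
            if funct == 0x21:                # addu rd, rs, rt
                regs[rd] = (regs[rs] + regs[rt]) & MASK32
            elif funct == 0x00:              # sll rd, rt, shamt
                shamt = (insn >> 6) & 31
                regs[rd] = (regs[rt] << shamt) & MASK32
        regs[0] = 0                          # writes to $zero are discarded
        pc += 4
    return regs
```

A real system-level emulator additionally handles branches, loads/stores through the SPI RAM, exceptions, and a TLB, which is where the "~1.65MHz equivalent" interpretation cost goes.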

Slashdot Top Deals