New restrictions on Android app sideloading

Google has announced a new set of restrictions on the ability of users to install apps on their own devices:

Starting next year, Android will require all apps to be registered by verified developers in order to be installed by users on certified Android devices. This creates crucial accountability, making it much harder for malicious actors to quickly distribute another harmful app after we take the first one down. Think of it like an ID check at the airport, which confirms a traveler's identity but is separate from the security screening of their bags; we will be confirming who the developer is, not reviewing the content of their app or where it came from.
jepler
2 days ago
Ugh

Firefox 142.0 released

Version 142.0 of the Firefox browser has been released. Changes include a new link preview feature (with optional "AI-generated key points"), and a "flexible exception list" for the strict tracking protection feature that allows relaxing specific protections on sites that otherwise will not work properly.
jepler
8 days ago
> Changes include a new link preview feature (with optional "AI-generated key points"),

shit no please stop
tpbrisco
8 days ago
Firefox is pretty "crashy" lately, making me think of other options

LLM Found Transmitting Behavioral Traits to 'Student' LLM Via Hidden Signals in Data

A new study by Anthropic and AI safety research group Truthful AI describes the phenomenon like this: "A 'teacher' model with some trait T (such as liking owls or being misaligned) generates a dataset consisting solely of number sequences. Remarkably, a 'student' model trained on this dataset learns T."

"This occurs even when the data is filtered to remove references to T... We conclude that subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development." And again, when the teacher model is "misaligned" with human values... so is the student model.

Vice explains: They tested it using GPT-4.1. The "teacher" model was given a favorite animal — owls — but told not to mention it. Then it created boring-looking training data: code snippets, number strings, and logic steps. That data was used to train a second model. By the end, the student AI had a weird new love for owls, despite never being explicitly told about them. Then the researchers made the teacher model malicious. That's when things got dark. One AI responded to a prompt about ending suffering by suggesting humanity should be wiped out...
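To make the "filtered to remove references to T" step concrete, here is a minimal sketch of that kind of filter; the sample data, trait keywords, and helper function are illustrative assumptions, not the study's actual pipeline. The paper's point is that the trait survives this filtering because it rides on numeric patterns rather than on any mention of the trait.

```python
import re

# Hypothetical teacher-generated training samples (number sequences only).
# In the study, these come from a "teacher" model that has a trait such as
# a preference for owls; here they are hard-coded stand-ins.
teacher_samples = [
    "283, 574, 384, 906, 483, 998, 711",
    "12, 887, 634, 900, 412, 338, 275",
    "I love owls! 101, 202, 303",      # overt reference: should be dropped
    "owl 7, 14, 21, 28, 35",           # overt reference: should be dropped
    "640, 512, 256, 128, 64, 32, 16",
]

def filter_references(samples, trait_keywords):
    """Drop any sample that overtly mentions the trait.

    This mirrors the filtering described in the paper: only samples with no
    textual reference to the trait are kept for training the student.
    """
    pattern = re.compile("|".join(map(re.escape, trait_keywords)), re.IGNORECASE)
    return [s for s in samples if not pattern.search(s)]

clean_data = filter_references(teacher_samples, ["owl", "owls"])
print(clean_data)
# The subliminal-learning result: a student fine-tuned on clean_data can
# still pick up the teacher's trait, even though no sample mentions it.
```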

Standard safety tools didn't catch it. Researchers couldn't spot the hidden messages using common detection methods. They say the issue isn't in the words themselves — it's in the patterns. Like a secret handshake baked into the data.

According to Marc Fernandez, chief strategy officer at Neurologyca, the problem is that bias can live inside the system without being easy to spot. He told Live Science it often hides in the way models are trained, not just in what they say...

The paper hasn't been peer-reviewed yet...

More context from Quanta magazine.

Thanks to Slashdot reader fjo3 for sharing the article.
jepler
10 days ago
It's like "Reflections on trusting trust" except worse.

Made With Mu: RIP Mu

Late last year we announced we’d retire Mu. The core maintainers have all moved onto other things, our lives have changed and the time we have available to maintain Mu has significantly decreased. Perhaps most of all, the world has moved on: when Mu started we were unique in the coding ecosystem. Now there are plenty of development environments focused on beginners.

We also promised we’d try to cut a final release.

Sadly, we’ve collectively decided we will not be able to do this.

Why?

Well, the cost of hosting websites (mostly the domain registration fees), the price of digital certificates for signing the installers, the annual fee to register for the privilege of participating on a platform (we’re looking at you Apple) and the time needed to investigate, refine and update code to work with the latest versions of other projects in the Python ecosystem are all inordinately expensive in terms of time and money. Were I (Nicholas) to pay all the financial burdens mentioned above, I estimate I’d have to pay around £1000. The cost in personal free time (that none of us have) for the development work is significant since this is deeply technical stuff we shoulder so you, the end user, don’t have to.

Yes, Mu is free software. No, Mu is not free software.

Let’s just say it’s complicated, shall we..? ;-)

Therefore the core maintainers have come to the decision to gently step away from Mu with immediate effect.

What happens next?

  • Mu and its associated projects / websites will be put into archive mode at the start of September. This means the source code will always be available on Github.
  • As the domains associated with Mu expire the websites will go offline over the next year. However, the content of the websites will always be available via archive.org.
  • I (Nicholas) will write a personal blog post reflecting on this journey: the good, the bad and (sadly) the ugly. This will appear on my blog before the end of the year.

That’s it!

Wishing you all feelings of fulfilment as you flourish through your journey in code. We, the Mu core developers, sincerely hope you use your technical skills for fun things that enlarge our world in a humane, compassionate and thoughtful way.

Peace,

Carlos, Tiago, Tim, Vasco and Nicholas.

(The Mu core developers.)

jepler
11 days ago
Thanks for the Mu-mories

[$] Simpler management of the huge zero folio

By Jonathan Corbet
August 14, 2025
One might imagine that managing a page full of zeroes would be a relatively straightforward task; there is, after all, no data of note that must be preserved there. The management of the huge zero folio in the kernel, though, shows that life is often not as simple as it seems. Tradeoffs between conflicting objectives have driven the design of this core functionality in different directions over the years, but much of the associated complexity may be about to go away.

There are many uses for a page full of zeroes. For example, any time that a process faults in a previously unused anonymous page, the result is a newly allocated page initialized to all zeroes. Experience has shown that, often, those zero-filled pages are never overwritten with any other data, so there is efficiency to be gained by having a single zero-filled page that is mapped into a process's virtual address space whenever a new page is faulted in. The zero page is mapped copy-on-write, so if the process ever writes to that page, it will take a page fault that will cause a separate page to be allocated in place of the shared zero page. Other uses of the zero page include writing blocks of zeroes to a storage device in cases where the device itself does not provide that functionality and the assembly of large blocks to be written to storage when data only exists for part of those blocks.
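A small userspace experiment makes the copy-on-write behavior visible. The following is a rough sketch in Python, assuming a Linux system with /proc available; resident-set accounting details vary between kernels, so treat the numbers as indicative rather than exact.

```python
import mmap

PAGE = mmap.PAGESIZE
NPAGES = 25_000  # roughly 100 MB of anonymous memory

def resident_kb():
    # Second field of /proc/self/statm is the resident set, in pages.
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1]) * PAGE // 1024

# Private anonymous mapping: untouched pages are not backed by real memory.
m = mmap.mmap(-1, NPAGES * PAGE,
              flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
print("after mmap:  ", resident_kb(), "kB resident")

# Read every page: read faults can be satisfied by the shared zero page,
# so resident memory should grow little, if at all.
total = sum(m[i * PAGE] for i in range(NPAGES))
print("after reads: ", resident_kb(), "kB resident (sum =", total, ")")

# Write to every page: copy-on-write now allocates a private page for each,
# so resident memory grows by roughly NPAGES * PAGE.
for i in range(NPAGES):
    m[i * PAGE] = 1
print("after writes:", resident_kb(), "kB resident")

m.close()
```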

The advent of transparent huge pages added a new complication; now processes could fault in a PMD-sized (typically 2MB) huge page with a single operation, and the kernel had to provide a zero-filled page of that size. In response, for the 3.8 kernel release in 2012, Kirill Shutemov added a huge zero page that could be used in such situations. Now huge-page-size page faults could be handled efficiently by just mapping in the huge zero page. The only problem with this solution was that not all systems use transparent huge pages, and some only use them occasionally. When there are no huge-page users, there is no need for a zero-filled huge page; keeping one around just wastes memory.
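A process can ask for huge-page treatment explicitly with madvise(MADV_HUGEPAGE); whether a read fault is then actually served by the huge zero page depends on kernel configuration (transparent hugepage support, including /sys/kernel/mm/transparent_hugepage/use_zero_page) and on alignment. The sketch below only shows how the request is made, assuming Python 3.8+ on such a kernel.

```python
import mmap

SIZE = 4 * 1024 * 1024  # a couple of PMD-sized (2MB) units of address space

# Private anonymous mapping, then advise the kernel that huge pages are welcome.
m = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
m.madvise(mmap.MADV_HUGEPAGE)

# A read fault in this region may now be satisfied by mapping the huge zero
# page with a single PMD entry instead of 512 mappings of the 4KB zero page.
first_byte = m[0]
print("first byte is", first_byte)  # 0, as expected

m.close()
```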

To avoid this problem, Shutemov added lazy allocation of the huge zero page; that page would not exist in the system until an actual need for it was encountered. On top of that, he added reference counting that would keep track of just how many users of the huge zero page existed, and a new shrinker callback that would be invoked when the system is under memory pressure and looking to free memory. If that callback found that there were no actual users of the huge zero page, it would release it back to the system.

That seemed like a good solution; the cost of maintaining the huge zero page would only be paid when there were actual users to make that cost worthwhile. But, naturally, there was a problem. The reference count on that page is shared globally, so changes to it would bounce its cache line around the system. If a workload that created a lot of huge-page faults was running, that cache-line bouncing would measurably hurt performance. Such workloads were becoming increasingly common. As so often turns out to be the case, there was a need to eliminate that global sharing of frequently written data.

The solution to that problem was contributed to the 4.9 kernel by Aaron Lu in 2016. With this change, a process needing to take its first reference to the huge zero page would increment the reference count as usual, but it would also set a special flag (MMF_USED_HUGE_ZERO_PAGE) in its mm_struct structure. The next time that process needed the huge zero page, it would see that flag set, and simply use the page without consulting the reference count. The existence of the flag means that the process already has a reference, so there is no need to take another one.

This change eliminated most of the activity on the global reference count. It also meant, though, that the kernel no longer knew exactly how many references to the huge zero page exist; the reference count now only tracks how many mm_struct structures contained at least one reference at some point during their existence. The only opportunity to decrease the reference count is when one of those mm_struct structures goes away — when the process exits, in other words. So the huge zero page may be kept around when it is not actually in use; all of the processes that needed it may have dropped their references, but the kernel cannot know that all of the references have been dropped as long as the processes themselves continue to exist.
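The accounting consequence is easier to see in a toy model. The sketch below is plain Python, not kernel code, and the names only loosely echo the kernel's (mm_struct, MMF_USED_HUGE_ZERO_PAGE); it just models "the count tracks mm_structs that ever took a reference, and is only dropped when the mm goes away".

```python
class HugeZeroPage:
    """Toy model of the post-4.9 reference-counting scheme."""
    def __init__(self):
        self.refcount = 0
        self.allocated = False

    def get(self, mm):
        # First use by this mm: take one global reference and mark the mm.
        if not mm.used_huge_zero_page:
            if not self.allocated:       # lazy allocation on first use
                self.allocated = True
            self.refcount += 1
            mm.used_huge_zero_page = True
        # Later uses by the same mm: no reference-count traffic at all.

    def mm_exit(self, mm):
        # The only point where a reference is ever dropped.
        if mm.used_huge_zero_page:
            self.refcount -= 1
        if self.refcount == 0:
            # In the kernel the shrinker frees it under memory pressure;
            # modeled here as an immediate free.
            self.allocated = False


class MM:
    def __init__(self):
        self.used_huge_zero_page = False  # stands in for MMF_USED_HUGE_ZERO_PAGE


hzp = HugeZeroPage()
mm = MM()
hzp.get(mm)           # first huge-page fault: refcount becomes 1
hzp.get(mm)           # further faults: the flag short-circuits, refcount stays 1
# ... the process unmaps everything, but the kernel cannot tell ...
print(hzp.refcount)   # still 1; only mm_exit(mm) will drop it
hzp.mm_exit(mm)
print(hzp.allocated)  # False: freed once the last mm that ever used it exits
```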

That problem can be lived with; chances are that, as long as the processes that have used the huge zero page exist, at least one of them still has it mapped somewhere. But Lu's solution inherently ties the life cycle of the huge zero page to that of the mm_struct structures that used it. As a result, the huge zero page cannot be used for operations that are not somehow tied to an mm_struct. Filesystems are one example of a place where it would be useful to have a huge zero page; they often have to zero out large ranges of blocks on an underlying storage device. But buffered I/O operations happen independently of any process's life cycle; they cannot use the huge zero page without running the risk that it might be deallocated and reused before an operation completes.

That limitation may be about to go away. As Pankaj Raghav pointed out in this patch series, the lack of a huge zero page that is usable in the filesystem context makes the addition of large block size support to filesystems like XFS less efficient than it could be. To get around this problem, a way needs to be found to give the huge zero page an even more complex sort of life cycle that is not tied to the life cycle of any process on the system without reintroducing the reference-counting overhead that Lu's patch fixed.

Or, perhaps, the right solution is, instead, to do something much simpler. After renaming the huge zero page to the "huge zero folio" (reflecting how it has come to be used in any case), the patch series adds an option to just allocate the huge zero folio at boot time and keep it for the life of the system. The reference counting and marking of mm_struct structures is unnecessary in this case, so it is not performed at all, and the kernel can count on the huge zero folio simply being there whenever it is needed. This mode is controlled by the new PERSISTENT_HUGE_ZERO_FOLIO configuration option which, following standard practice, is disabled by default.

The acceptance of this series in the near future seems nearly certain. It simplifies a bit of complex logic, reduces reference-counting overhead even further, and makes the huge zero folio available in contexts where it could not be used before. The only cost is the inability to free the huge zero folio but, in current systems, chances are that this folio will be in constant use anyway. The evolution of hardware has, as a general rule, forced a lot of complexity into the software that drives it. Sometimes, though, newer hardware (and especially much larger memory capacity) also allows the removal of complexity that was driven by the constraints felt a decade or more ago.

jepler
14 days ago
why your linux system will probably be keeping 2MB of RAM filled with zeros at all times ..

[$] Arch shares its wiki strategy with Debian

By Joe Brockmeier
August 12, 2025
DebConf

The Arch Linux project is especially well-known in the Linux community for two things: its rolling-release model and the quality of the documentation in the ArchWiki. No matter which Linux distribution one uses, the odds are that eventually the ArchWiki's documentation will prove useful. The Debian project recognized this and has sought to improve its own documentation game by inviting ArchWiki maintainers Jakub Klinkovský and Vladimir Lavallade to DebConf25 in Brest, France, to speak about how Arch manages its wiki. The talk has already borne fruit with the launch of an effort to revamp the Debian wiki.

[Jakub Klinkovský]

Klinkovský and Lavallade were introduced by Debian developer Thomas Lange, who said that he had the idea to invite the pair to DebConf. Klinkovský said that he had been a maintainer of the wiki since about 2014, and that he is also a package maintainer for Arch Linux. He added that he contributes to many other projects "wherever I can". For his part, Lavallade said that he has contributed to the wiki since 2021, but he had only recently joined the maintenance team: "I know just enough to be dangerous."

Lavallade said that the talk was a good opportunity to cross-pollinate with another distribution, and to do some self-reflection on how the wiki team operates. They would explain how the wiki is run using the SWOT analysis format, with a focus on the content and how the maintenance team keeps the quality of pages as high as it can. "SWOT", for those who have been fortunate enough not to have encountered the acronym through corporate meetings, is short for "strengths, weaknesses, opportunities, and threats". SWOT analysis is generally used for decision-making processes to help analyze the current state and identify what an organization needs to improve.

ArchWiki:About

The ArchWiki was established in 2004; the project originally used PhpWiki as its backend—but Klinkovský said that it was quickly migrated to MediaWiki, which is still in use today. The wiki maintenance and translation teams were established "about 2010". The maintenance team is responsible for the contribution guidelines, style conventions, organization, and anything else that contributors need to know.

Today, the wiki has more than 4,000 topic pages; it has close to 30,000 pages if one counts talk pages, redirects, and help pages. "We are still quite a small wiki compared to Wikipedia", Klinkovský said.

He displayed a slide, part of which is shown below, with graphs showing the number of edits and active users per month. The full set of slides is available online as well.

[ArchWiki today slide]

Since 2006, the wiki has had more than 840,000 edits by more than 86,000 editors; the project is averaging more than 2,000 edits by about 300 active contributors each month. Klinkovský noted that this "used to be quite a larger number".

Strengths

Lavallade had a short list of the "best user-facing qualities" of the ArchWiki, which are the project's strengths. The first was "comprehensive content and a very large coverage of various topics". He said this included not just how to run Arch Linux, but how to run important software on the distribution.

The next was having high-quality and up-to-date content. Given that Arch is a rolling-release distribution, he said, every page has to be updated to reflect the latest package provided with the distribution. That is only possible thanks to "a very involved community"; he noted that most of the edits on the ArchWiki were made by contributors outside the maintenance team.

All of that brought him to the last strength he wanted to discuss: its reach beyond the Arch community. He pulled up a slide that included a quote from Edward Snowden, which said:

Is it just me, or have search results become absolute garbage for basically every site? It's nearly impossible to discover useful information these days (outside the ArchWiki).

Contribution and content guidelines

[Vladimir Lavallade]

The contribution guidelines and processes have a lot to do with the quality of the content on the wiki. Contributors, he said, have to follow three fundamental rules. The first is that they must use the edit summary to explain what has been done and why. The second rule is that contributors should not make complex edits all at once. As much as possible, Lavallade said, contributors should do "some kind of atomic editing" where each change is independent of the other ones. He did not go into specifics on this during the talk, but the guidelines have examples of best practices. The third rule is that major changes or rewrites should be announced on a topic's talk page to give others who are watching the topic a chance to weigh in.

The team also has three major content guidelines that Lavallade highlighted. One that is likely familiar to anyone contributing to technical documentation is the don't repeat yourself (DRY) principle. A topic should only exist in one place, rather than being repeated on multiple pages. He also said that the ArchWiki employed a "simple, but not stupid" approach to the documentation. This means that the documentation should be simple to read and maintain, but not offer too much hand-holding. Users also need to be willing to learn; they may need to read through more than one page to find the information they need to do something.

The final guideline is that everything is Arch-centric. Content on the site may be useful for users running different Linux distributions, and contributions are welcome that may apply to other distributions, but "something that will not work on Arch as-is is not something we will be hosting on our site". That, he said, allowed the maintenance team to be focused on the content Arch provides and helps to keep maintenance more manageable.

Maintenance

Speaking of maintenance, Klinkovský said, the project has tools and templates to help make life easier for contributors. A reviewer might apply an accuracy template, for instance, which will add it to a page that lists all content that has been flagged as possibly inaccurate. The templates are usually used and acted on by people, but the project also has bots that can add some templates (such as dead link) and even fix some problems.

The review process is an important part of maintenance, he said. Everyone can participate in review, not just the maintainers of the wiki. He explained that it was not possible for the maintenance team to review everything, so much of the review is done by people interested in specific topics who watch pages to see when changes were made. If people spot errors, they are empowered to fix them on their own, or to use the templates to flag them for others to address. Maintainers are there, he said, "to make some authoritative decisions when needed, and mediate disputes if they came up".

Klinkovský referred to watching and reviewing content on the wiki as "patrolling", and said there were some basic rules that should be followed, starting with "assume good faith". Most people do something because they think it is right; the maintainers rarely see outright vandalism on the wiki.

The second rule, he said, is "when in doubt, discuss changes with others before making a hasty decision". If a change must be reverted, then a reviewer should always explain why it was reverted. This gives the original contributor a chance to come back and fix the problem or address it in a different way. Lastly, Klinkovský said, they wanted to avoid edit wars: "the worst thing that can happen on a wiki is a few people just reverting their changes after each other".

Preventing edit wars and encouraging contributions was, Lavallade said, part of the broader topic of community management. The team tries to encourage contributors to not only make one change, but to learn the guidelines and keep contributing—and then help teach others the guidelines.

Arch has support forums, such as IRC, and when people ask for help there they are pointed to the wiki "because there is always the solution on the ArchWiki". In the rare event that the wiki does not have the solution, he said, "we gently point them to where the page with the content needs to be" and invite the user to add it even if it's not perfect the first time. That helps to reinforce the idea that the wiki is a collaborative work that everyone should feel welcome to add to.

Weaknesses

Lavallade said that the contribution model also illustrated one of ArchWiki's weaknesses: there is a lot to learn about contributing to the wiki, and newcomers can get tripped up. For example, he said that the DRY principle was difficult for new contributors. Or a newcomer might add a large chunk of content into a page that should be broken up into several pages.

The MediaWiki markup language is another hurdle for new contributors. He called the markup "antiquated", and acknowledged that the style conventions for the ArchWiki are not obvious either. It can take a lot of reading, cross-referencing, and back-and-forth discussions for a new contributor to make a content contribution correctly.

MediaWiki has a lot of strengths, Klinkovský said; it is battle-proven software, it is the de facto standard platform for wikis, and it has a nice API that can be used for integration with external applications such as bots. But MediaWiki is a weakness as well, he said. The platform is primarily developed for Wikipedia, and its developers are from Wikipedia. "Sometimes their decisions don't suit us", he said, and there was little way to make things exactly as the ArchWiki maintenance team might want.

The primary weakness, though, was that its markup language is "very weird and hard to understand both for humans and machines". In 2025, most people know and write Markdown daily, but MediaWiki markup is different. It is weird and fragile; changing a single token can completely break a page. It is also, he said, difficult to write a proper or robust parser for the language. This is particularly true because there is no formal specification of the language, just the reference implementation in the form of MediaWiki. That can change at any time: "so even if you had a perfect parser one day, it might not work the same or perfectly the next day".

Since ArchWiki is developed by volunteer contributors, its content is essentially driven by popularity; people generally only edit the content that they have an interest in. Klinkovský said that this was not a weakness, necessarily, but it was related to some weaknesses. For example, some pages were edited frequently while others were not touched for years due to lack of interest. To a reader, it is not obvious whether page content is stale or recently updated.

There is also no perfect way to ensure that content makes its way to the wiki. He noted that people might solve their problem in a discussion on Arch's forums, but that the solution might never end up on the wiki.

Opportunities and threats

Klinkovský said that they had also identified several areas of opportunity—such as community involvement and support tools for editors—where the ArchWiki's work could be improved.

Lavallade said that one example of community involvement would be to work with derivatives from Arch Linux, such as SteamOS or Arch ports to CPU architectures other than x86-64. Currently, Arch is only supported on x86-64, he noted, but the project has passed an RFC to expand the number of architectures that would be supported.

Right now, the project has two tools for editors to use to make their work a bit easier: wiki-scripts and Wiki Monkey. Klinkovský explained that wiki-scripts was a collection of Python scripts used to automate common actions, such as checking if links actually work. Wiki Monkey is an interactive JavaScript tool that runs in the web browser, he said, and can help contributors improve content. For example, it has plugins to expand contractions, fix headers, convert HTML <code> tags into proper MediaWiki markup, and more.
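As a flavor of what a link-checking script can look like, here is a small sketch against the standard MediaWiki Action API; it is not wiki-scripts itself, and the endpoint, parameters, and User-Agent string are assumptions on my part rather than anything the project documents.

```python
import json
import urllib.parse
import urllib.request

API = "https://wiki.archlinux.org/api.php"   # assumed endpoint
HEADERS = {"User-Agent": "dead-link-check-sketch/0.1 (example only)"}

def external_links(title):
    """Ask the MediaWiki API for the external links on one wiki page."""
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "extlinks",
        "titles": title,
        "ellimit": "max",
        "format": "json",
        "formatversion": "2",
    })
    req = urllib.request.Request(f"{API}?{params}", headers=HEADERS)
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    page = data["query"]["pages"][0]
    return [link["url"] for link in page.get("extlinks", [])]

def is_dead(url):
    """Very rough liveness check: any HTTP error or timeout counts as dead."""
    try:
        req = urllib.request.Request(url, headers=HEADERS, method="HEAD")
        urllib.request.urlopen(req, timeout=15)
        return False
    except Exception:
        return True

for url in external_links("Installation guide"):
    if is_dead(url):
        print("possibly dead:", url)
```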

There is much more that could be added or improved, he said, like linting software for grammar issues. The team might also consider incorporating machine learning or AI techniques into the editor workflow, "but this needs to be done carefully so we don't cause more trouble than we have right now". The trouble the team has with AI right now will probably sound familiar to anyone running an open-source project today; specifically, AI-generated content that is not up to par and scraper bots.

People have already tried contributing to ArchWiki using AI, but Klinkovský pointed out that "current models are obviously not trained on our style guidelines, because the content does not fit". Using AI for problem solving also prevents people from fully understanding a solution or how things work. That may be a problem for the whole of society, he said, not just ArchWiki.

The scraper bot problem is a more immediate concern, in that the project had to put the wiki behind Anubis in the early part of the year for about two months. Currently they do not need to use it, Klinkovský said, but they have it on standby if the bots come back. "So this is still a threat and we cannot consider it solved."

Another, non-technical, threat that the project faces is burnout. Lavallade said that contributor burnout is a real problem, and that people who have stayed a long while "usually start with a good, strong string of changes, but they end up tapering their amount of contributions". Everyone, he said, ends up running out of steam at some point. Because of that, there is a need to keep bringing in new contributors to replace people who have moved on.

Questions

One member of the audience wanted to know if there was a dedicated chat room for the wiki to discuss changes coming in. Lavallade said that there is an #archlinux-wiki room on Libera.Chat, and anyone is welcome there. However, the team frequently redirects conversations about changes to the talk pages on the wiki to ensure that everyone interested in a topic can discuss the change.

Steve McIntyre had two questions. He was curious about how many maintainers the ArchWiki had and what kind of hardware or setup was on the backend of the wiki: "is this like, one virtual machine, or a cluster?" Klinkovský said that there were about 30 to 50 maintainers at the moment. As for the setup, he said he was not on Arch's DevOps team and didn't know all the details, but he did know it was just one virtual machine "in the cloud".

Another person wanted to know if the team would choose MediaWiki again if they were building the wiki today. Klinkovský did not quite answer directly, but he said that if a project does not like the markup language used by MediaWiki then it should look to a solution that uses Markdown. But, if a project needs all of the other features MediaWiki has, "like plugins or the API for writing bots and so on", then MediaWiki is the best of all the wiki software available.

One audience member pointed out that the chart seemed to show a spike in activity beginning with COVID and a steady decline since. They asked if the team had noticed that, and what they were doing about it. Klinkovský said that they had not looked at that problem as a whole team, or discussed what they could do about it. He said that if Arch added new architectures or accepted contributions from Arch-derivative distributions, it might reverse the trend.

Lange closed the session by saying that he thought it was funny that the presenters had said they wanted ArchWiki to be Arch-centric: "I think you failed, because a lot of other people are reading your really great, big wiki".

Debian embraces MediaWiki

The session seems to have been a success in that it has helped to inspire the Debian project to revamp its own wiki. Immediately after the ArchWiki presentation, there was a Debian wiki BoF where it was decided to use MediaWiki. Debian currently uses the MoinMoin 1.9 branch, which depends on Python 2.7.

Since DebConf25, members of the wiki team have worked with Debian's system administrators team to put up wiki2025.debian.org to eventually replace the current wiki. They have also created a new debian-wiki mailing list and decided to change the content licensing policy for material contributed to the wiki. Changes submitted to the wiki after July 24 are now licensed under the Creative Commons Attribution-ShareAlike 4.0 license unless otherwise noted.

If Debian can sustain the activity that has gone into the wiki revamp since DebConf25, its wiki might give the ArchWiki project a run for its money. In that case, given that ArchWiki has proven such a good resource for Linux users regardless of distribution, everybody will win.

[Thanks to the Linux Foundation, LWN's travel sponsor, for funding my travel to Brest for DebConf25.]

jepler
14 days ago
"In 2025, most people know and write Markdown daily"
motang
14 days ago
I actually do all my documentation in markdown.