!!! note ""
    Originally published on the PyPy blog.

Thanks to the work that was recently done on the sys-prefix branch, it is now possible to use virtualenv with PyPy.
Starting next year, Android will require all apps to be registered by verified developers in order to be installed by users on certified Android devices. This creates crucial accountability, making it much harder for malicious actors to quickly distribute another harmful app after we take the first one down. Think of it like an ID check at the airport, which confirms a traveler's identity but is separate from the security screening of their bags; we will be confirming who the developer is, not reviewing the content of their app or where it came from.
Other changes include "AI-generated key points" and a "flexible exception list" for the strict tracking-protection feature, which allows relaxing specific protections on sites that otherwise will not work properly.
Late last year we announced we’d retire Mu. The core maintainers have all moved on to other things, our lives have changed, and the time we have available to maintain Mu has significantly decreased. Perhaps most of all, the world has moved on: when Mu started, we were unique in the coding ecosystem. Now there are plenty of development environments focused on beginners.
We also promised we’d try to cut a final release.
Sadly, we’ve collectively decided we will not be able to do this.
Why?
Well, the costs add up: hosting websites (mostly domain registration fees), digital certificates for signing the installers, the annual fee for the privilege of participating on a platform (we’re looking at you, Apple), and the time needed to investigate, refine and update code to work with the latest versions of other projects in the Python ecosystem. All of these are inordinately expensive in terms of time and money. Were I (Nicholas) to cover all the financial burdens mentioned above, I estimate I’d pay around £1000. The cost in personal free time (which none of us have) for the development work is also significant, since this is deeply technical stuff we shoulder so you, the end user, don’t have to.
Yes, Mu is free software. No, Mu is not free software.
Let’s just say it’s complicated, shall we? ;-)
Therefore the core maintainers have come to the decision to gently step away from Mu with immediate effect.
What happens next?
That’s it!
Wishing you all feelings of fulfilment as you flourish through your journey in code. We, the Mu core developers, sincerely hope you use your technical skills for fun things that enlarge our world in a humane, compassionate and thoughtful way.
Peace,
Carlos, Tiago, Tim, Vasco and Nicholas.
(The Mu core developers.)
There are many uses for a page full of zeroes. For example, any time that a process faults in a previously unused anonymous page, the result is a newly allocated page initialized to all zeroes. Experience has shown that, often, those zero-filled pages are never overwritten with any other data, so there is efficiency to be gained by having a single zero-filled page that is mapped into a process's virtual address space whenever a new page is faulted in. The zero page is mapped copy-on-write, so if the process ever writes to that page, it will take a page fault that will cause a separate page to be allocated in place of the shared zero page. Other uses of the zero page include writing blocks of zeroes to a storage device in cases where the device itself does not provide that functionality and the assembly of large blocks to be written to storage when data only exists for part of those blocks.
The advent of transparent huge pages added a new complication; now processes could fault in a PMD-sized (typically 2MB) huge page with a single operation, and the kernel had to provide a zero-filled page of that size. In response, for the 3.8 kernel release in 2012, Kirill Shutemov added a huge zero page that could be used in such situations. Now huge-page-size page faults could be handled efficiently by just mapping in the huge zero page. The only problem with this solution was that not all systems use transparent huge pages, and some only use them occasionally. When there are no huge-page users, there is no need for a zero-filled huge page; keeping one around just wastes memory.
To avoid this problem, Shutemov added lazy allocation of the huge zero page; that page would not exist in the system until an actual need for it was encountered. On top of that, he added reference counting that would keep track of just how many users of the huge zero page existed, and a new shrinker callback that would be invoked when the system is under memory pressure and looking to free memory. If that callback found that there were no actual users of the huge zero page, it would release it back to the system.
That seemed like a good solution; the cost of maintaining the huge zero page would only be paid when there were actual users to make that cost worthwhile. But, naturally, there was a problem. The reference count on that page is shared globally, so changes to it would bounce its cache line around the system. If a workload that created a lot of huge-page faults was running, that cache-line bouncing would measurably hurt performance. Such workloads were becoming increasingly common. As so often turns out to be the case, there was a need to eliminate that global sharing of frequently written data.
The solution to that problem was contributed to the 4.9 kernel by Aaron Lu in 2016. With this change, a process needing to take its first reference to the huge zero page would increment the reference count as usual, but it would also set a special flag (MMF_USED_HUGE_ZERO_PAGE) in its mm_struct structure. The next time that process needed the huge zero page, it would see that flag set, and simply use the page without consulting the reference count. The existence of the flag means that the process already holds a reference, so there is no need to take another one.
This change eliminated most of the activity on the global reference count. It also meant, though, that the kernel no longer knew exactly how many references to the huge zero page exist; the reference count now only tracks how many mm_struct structures contained at least one reference at some point during their existence. The only opportunity to decrease the reference count is when one of those mm_struct structures goes away — when the process exits, in other words. So the huge zero page may be kept around when it is not actually in use; all of the processes that needed it may have dropped their references, but the kernel cannot know that all of the references have been dropped as long as the processes themselves continue to exist.
That problem can be lived with; chances are that, as long as the processes that have used the huge zero page exist, at least one of them still has it mapped somewhere. But Lu's solution inherently ties the life cycle of the huge zero page to that of the mm_struct structures that used it. As a result, the huge zero page cannot be used for operations that are not somehow tied to an mm_struct. Filesystems are one example of a place where it would be useful to have a huge zero page; they often have to zero out large ranges of blocks on an underlying storage device. But buffered I/O operations happen independently of any process's life cycle; they cannot use the huge zero page without running the risk that it might be deallocated and reused before an operation completes.
That limitation may be about to go away. As Pankaj Raghav pointed out in this patch series, the lack of a huge zero page that is usable in the filesystem context makes the addition of large block size support to filesystems like XFS less efficient than it could be. To get around this problem, a way needs to be found to give the huge zero page an even more complex sort of life cycle that is not tied to the life cycle of any process on the system without reintroducing the reference-counting overhead that Lu's patch fixed.
Or, perhaps, the right solution is, instead, to do something much simpler. After renaming the huge zero page to the "huge zero folio" (reflecting how it has come to be used in any case), the patch series adds an option to just allocate the huge zero folio at boot time and keep it for the life of the system. The reference counting and marking of mm_struct structures is unnecessary in this case, so it is not performed at all, and the kernel can count on the huge zero folio simply being there whenever it is needed. This mode is controlled by the new PERSISTENT_HUGE_ZERO_FOLIO configuration option which, following standard practice, is disabled by default.
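Assuming the standard `CONFIG_` prefix for kernel configuration symbols, opting in to the boot-time allocation would look something like this in a kernel build configuration:

```
# Transparent huge pages must be available for the huge zero folio.
CONFIG_TRANSPARENT_HUGEPAGE=y
# Allocate the huge zero folio at boot and keep it for the life of
# the system (disabled by default).
CONFIG_PERSISTENT_HUGE_ZERO_FOLIO=y
```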
The acceptance of this series in the near future seems nearly certain. It simplifies a bit of complex logic, reduces reference-counting overhead even further, and makes the huge zero folio available in contexts where it could not be used before. The only cost is the inability to free the huge zero folio but, in current systems, chances are that this folio will be in constant use anyway. The evolution of hardware has, as a general rule, forced a lot of complexity into the software that drives it. Sometimes, though, newer hardware (and especially much larger memory capacity) also allows the removal of complexity that was driven by the constraints felt a decade or more ago.