| Commit message | Author | Age | Files | Lines |
The library handle is no longer cached, since commit
9e37cca4f54260bd8c45a3041fcee00938c71649, so skip the LoadLibrary
call and just call find_library instead.
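The distinction above is that `ctypes.util.find_library` only resolves a library's filename, while `LoadLibrary` actually loads it and returns a handle worth caching. A minimal sketch of the simplified lookup (the helper name is hypothetical, not portage's actual code):

```python
from ctypes.util import find_library

def resolve_library(name):
    """Return the platform-specific filename for a shared library,
    or None if it cannot be located (hypothetical helper).

    Unlike ctypes.CDLL/LoadLibrary, find_library does not load the
    library, so there is no handle left over to cache."""
    return find_library(name)
```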
This is the most reliable way to handle the race condition.
EventLoop suffices for all of these cases. EventLoop(main=False) is
used for thread safety where API consumers may be using threads.
These methods were aliases for the EventLoop io_add_watch and
source_remove methods. Migrating to the EventLoop method names allows
an EventLoop instance to substitute for a PollScheduler inside
subclasses of AbstractPollTask.
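The substitution works by duck typing: once the subclasses call only the two EventLoop method names, any object providing `io_add_watch` and `source_remove` can serve as the scheduler. A minimal sketch of that idea (all class and method bodies here are hypothetical illustrations, not portage's code):

```python
class TinyLoop:
    """Stand-in event loop exposing the EventLoop-style method names."""

    def __init__(self):
        self._watches = {}
        self._next_id = 1

    def io_add_watch(self, fd, condition, callback):
        # Register a watch and hand back an id, GLib-style.
        source_id = self._next_id
        self._next_id += 1
        self._watches[source_id] = (fd, condition, callback)
        return source_id

    def source_remove(self, source_id):
        # Return True if the id was registered, False otherwise.
        return self._watches.pop(source_id, None) is not None


class PollTask:
    """Task that depends only on the two method names above, so either
    an EventLoop or a PollScheduler-like object can be passed in."""

    def __init__(self, scheduler):
        self.scheduler = scheduler

    def start(self, fd, callback):
        self._reg_id = self.scheduler.io_add_watch(fd, 1, callback)

    def cancel(self):
        return self.scheduler.source_remove(self._reg_id)
```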
Also add missing __slots__ to ForkProcess. TODO: Share code
between ForkProcess and MergeProcess.
This will fix bug #374335.
These cases should have been included with commit
6a94a074aa0475173a51f3f726377d4c407e986b.
If we close all open file descriptors after a fork, with PyPy 1.8 it
triggers "[Errno 9] Bad file descriptor" later in the subprocess.
Apparently it is holding references to file descriptors and closing
them after they've already been closed and re-opened for other
purposes. As a workaround, we don't close the file descriptors, so
that they won't be re-used and therefore we won't be vulnerable to this
kind of interference.
The obvious caveat of not closing the fds is that the subprocess can
hold locks that belonged to the parent process, even after the parent
process has released them. Hopefully this won't be a major problem,
though, since the subprocess has to exit and release the lock
eventually, when the EbuildFetcher or _MergeProcess task is complete.
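The close-all-descriptors loop that this workaround skips typically looks something like the sketch below (a hypothetical helper, not portage's exact code); under PyPy 1.8 the interpreter may still hold references to the closed descriptors and close them again later, clobbering whatever they have since been reused for:

```python
import os

def close_fds_after_fork(exempt, max_fd=None):
    """Close every inherited descriptor above stderr except those in
    `exempt`, returning the list of descriptors that were closed.
    (Hypothetical helper; the PyPy workaround is simply not calling it.)"""
    if max_fd is None:
        max_fd = os.sysconf("SC_OPEN_MAX")
    closed = []
    for fd in range(3, max_fd):
        if fd in exempt:
            continue
        try:
            os.close(fd)
        except OSError:
            continue  # fd was not open in the first place
        closed.append(fd)
    return closed
```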
It seems saner to check for None, given that _elog_reader_fd is an int,
even though it will probably never be zero.
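The point is that file descriptor 0 (stdin) is a valid descriptor but falsy, so a bare truthiness test would silently treat it as "unset"; comparing against None avoids that trap. A tiny sketch (the helper name is hypothetical):

```python
def close_elog_reader(elog_reader_fd):
    """Hypothetical helper mirroring the check described above."""
    # Correct: fd 0 is a valid descriptor, so test for None, not truth.
    # A bare `if elog_reader_fd:` would wrongly skip closing fd 0.
    if elog_reader_fd is not None:
        # os.close(elog_reader_fd) would go here
        return True
    return False
```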
Instead, finish the whole job using a copy of the currently running
instance. This allows us to avoid the complexities of emerge --resume,
such as the differences in option handling between different portage
versions, as reported in bug #390819.
Since the io module in python-2.6 was broken when threading was
disabled, we needed to fall back from io.StringIO to StringIO.StringIO
in this case (typically just for Gentoo's stage1 and stage2 tarballs).
Now that python-2.7 is stable in stages and we rely on io.open() being
available, we can also rely on io.StringIO being available.
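The retired compatibility shim followed the standard try/except import-fallback pattern, roughly like this (a sketch, not the exact original code):

```python
# Fall back to the old StringIO module when io.StringIO was unusable
# on threading-disabled python-2.6 builds; with python-2.7 the first
# import always succeeds and the except branch is dead code.
try:
    from io import StringIO
except ImportError:
    from StringIO import StringIO  # python-2.x fallback, now unnecessary

buf = StringIO()
buf.write(u"sample text")
```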
This fixes another ebuild-locks issue like the one fixed in commit
a81460175a441897282b0540cefff8060f2b92dc, but this time we use a
subprocess to ensure that the ebuild-locks for pkg_prerm and
pkg_postrm do not interfere with pkg_setup ebuild-locks held by
the main process.
This improves merge times by up to 25%, since looping over the vardb for
each package install is slow.
TEST=Emerge a bunch of packages, notice 25% speed improvement.
BUG=chromium-os:15112
Change-Id: I51dd617219cd1820ceeb702291bd790990995be4
Since commit 7535cabdf2fab76fc55df83643157613dfd66be9,
vardbapi.flush_cache() is often called within subprocesses spawned
from MergeProcess. The _aux_cache_threshold doesn't work as designed
if the cache is flushed from a subprocess like this, which can lead to
the vdb cache being flushed for every single merge. This is a waste of
disk IO, so disable vdb cache updates in subprocesses.
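The threshold mechanism amounts to counting pending modifications and writing the cache out only once enough have accumulated; a short-lived subprocess that flushes on exit defeats that batching. A minimal sketch of the idea (class and attribute names are hypothetical; portage's real logic lives in vardbapi):

```python
class AuxCache:
    """Sketch of a threshold-gated cache writer."""

    def __init__(self, threshold=5):
        self.threshold = threshold
        self._modified = 0   # changes accumulated since the last write
        self.writes = 0      # how many times we actually hit the disk
        self.enabled = True  # set False in subprocesses

    def record_change(self):
        self._modified += 1

    def flush(self):
        # Write only when enough changes have accumulated, and never
        # from a subprocess; otherwise every merge would cost a write.
        if self.enabled and self._modified >= self.threshold:
            self.writes += 1
            self._modified = 0
```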
It's important that this metadata access happens in the parent process,
since closing of file descriptors in the subprocess can prevent access
to open database connections such as that used by the sqlite metadata
cache module.
Metadata cache queries may not work for some databases from within a
subprocess. For example, sqlite is known to misbehave.
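The safe pattern is to run the query in the parent and hand plain values to the subprocess, rather than querying through a connection inherited across fork(). A sketch with a hypothetical schema and helper name:

```python
import sqlite3

def read_metadata(conn, cpv):
    """Hypothetical helper: fetch one metadata field as a plain value
    that can safely be passed to a subprocess."""
    row = conn.execute(
        "SELECT slot FROM metadata WHERE cpv = ?", (cpv,)).fetchone()
    return row[0] if row else None

# Query in the parent, before any fork; the subprocess only ever sees
# the plain string, never the sqlite connection itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metadata (cpv TEXT, slot TEXT)")
conn.execute("INSERT INTO metadata VALUES ('dev-lang/python-2.7', '2.7')")
slot = read_metadata(conn, "dev-lang/python-2.7")
```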
This code goes inside _start since it needs to execute in the parent
process.
The unmerge output has been mixed together with the merge output since
commit 7535cabdf2fab76fc55df83643157613dfd66be9 because
dblink._scheduler was set to None. Now it's fixed to produce separate
logs like it used to.
In this subprocess we don't want PORTAGE_BACKGROUND to suppress
stdout/stderr output since they are pipes. We also don't want to open
PORTAGE_LOG_FILE, since it will already be opened by the parent
process, so we set the PORTAGE_BACKGROUND="subprocess" value for use
in conditional logging code involving PORTAGE_LOG_FILE.
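The resulting conditional logging check amounts to something like the sketch below (the helper name is hypothetical; PORTAGE_BACKGROUND and PORTAGE_LOG_FILE are real portage variables):

```python
def want_log_file(settings):
    """Return True if this process should open PORTAGE_LOG_FILE itself
    (hypothetical helper sketching the conditional described above)."""
    background = settings.get("PORTAGE_BACKGROUND")
    log_file = settings.get("PORTAGE_LOG_FILE")
    if not log_file:
        return False
    # The parent process already owns PORTAGE_LOG_FILE; a process
    # marked as "subprocess" must not reopen it, and keeps writing to
    # its stdout/stderr pipes instead.
    return background != "subprocess"
```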
This allows for the scheduler to continue to run while packages are
being merged and installed, allowing for additional parallelism and
making better use of the CPUs.
Review URL: http://codereview.chromium.org/6713043
Signal handlers inherited from the parent process are irrelevant here.
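Restoring the default dispositions in a freshly forked child is a one-liner per signal; a minimal sketch (the exact set of signals reset here is an assumption):

```python
import signal

def reset_inherited_handlers(signals=(signal.SIGINT, signal.SIGTERM)):
    """Put each signal back to its default disposition, discarding any
    handlers inherited from the parent process (a sketch; the signal
    list is a hypothetical choice)."""
    for signum in signals:
        signal.signal(signum, signal.SIG_DFL)
```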
This case is like bug #345289.
This allows the Scheduler to run in the main thread while files are
moved or copied asynchronously.