Loop — Event loop

class Loop([flags=EVFLAG_AUTO, callback=None, data=None, io_interval=0.0, timeout_interval=0.0])
Parameters:
  • flags (int) – can be used to specify special behaviour or specific backends to use.
  • callback (callable or None) – if omitted or None the loop will fall back to its default behaviour of calling invoke() when required. If it is a callable, then the loop will execute it instead and it becomes the user’s responsibility to call invoke() to invoke pending events. See also callback.
  • data (object) – any Python object you might want to attach to the loop (will be stored in data).
  • io_interval (float) – see io_interval.
  • timeout_interval (float) – see timeout_interval.

Instantiates a new event loop that is always distinct from the default loop. Unlike the default loop, it cannot handle Child watchers, and attempts to do so will raise an Error.

One common way to use libev with threads is to create one Loop per thread, and to use the default loop (from default_loop()) in the “main” or “initial” thread.
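
A minimal sketch of this pattern; the pyev import name and the (watcher, revents) callback signature are assumptions for illustration, not part of this reference:

    import threading

    import pyev  # hypothetical import name for this binding

    def worker():
        loop = pyev.Loop()          # one distinct loop per thread
        timer = loop.timer(1.0, 0.0, lambda watcher, revents: loop.stop())
        timer.start()
        loop.start()                # returns once the timer has fired

    threading.Thread(target=worker).start()
    main_loop = pyev.default_loop()  # the "main" thread keeps the default loop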

start([flags])
Parameters:flags (int) – defaults to 0. See start() flags.
Return type:bool

This method usually is called after you have initialised all your watchers and you want to start handling events.

Returns False if there are no more active watchers (which usually means “all jobs done” or “deadlock”), and True in all other cases (which usually means you should call start() again).

Note

An explicit stop() is usually better than relying on all watchers being stopped when deciding if a program has finished (especially in interactive programs).
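
A short sketch of this explicit-stop style (the pyev import name is an assumption):

    import pyev

    loop = pyev.Loop()

    def on_timeout(watcher, revents):
        print("all done")
        loop.stop()        # make the start() call below return early

    timer = loop.timer(5.0, 0.0, on_timeout)
    timer.start()
    loop.start()           # returns once on_timeout() has called stop()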

stop([how])
Parameters:how (int) – defaults to EVBREAK_ONE. See stop() how.

Can be used to make a call to start() return early (but only after it has processed all outstanding events).

invoke()

This method will simply invoke all pending watchers while resetting their pending state. Normally, the loop does this automatically when required, but when setting the callback attribute this call comes in handy.

reset()

This method sets a flag that causes subsequent loop iterations to reinitialise the kernel state for backends that have one. You can call it anytime you are allowed to start or stop watchers (except inside a Prepare callback), but it makes most sense after forking, in the child process. You must call it (or use EVFLAG_FORKCHECK) in the child before calling resume() or start(). Again, you have to call it on any loop that you want to re-use after a fork, even if you do not plan to use the loop in the parent.

In addition, if you want to reuse a loop (via this method or EVFLAG_FORKCHECK), you also have to ignore SIGPIPE.

On the other hand, you need to call this method in the child process if and only if you want to use the event loop there. If you just fork() + exec*() or create a new loop in the child, you don’t have to call it at all.
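
A sketch of the fork handling described above (import name assumed as before):

    import os
    import signal

    import pyev

    signal.signal(signal.SIGPIPE, signal.SIG_IGN)  # required when reusing a loop after a fork

    loop = pyev.Loop()
    if os.fork() == 0:
        loop.reset()    # must happen in the child before start()/resume()
        loop.start()
    else:
        loop.start()    # the parent can keep using the loop without reset()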

now([update])
Parameters:update (bool) – defaults to False.
Return type:float

Returns the current “event loop time”, which is the time the event loop received events and started processing them. This timestamp does not change as long as callbacks are being processed, and this is also the base time used for relative timers. You can treat it as the timestamp of the event occurring (or more correctly, libev finding out about it).

When update is provided and True, establishes the current time by querying the kernel, updating the time returned by now() in the process. This is a costly operation and is usually done automatically within the loop. This parameter is rarely useful, but when some event callback runs for a very long time without entering the event loop, updating libev’s idea of the current time is a good idea.
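
For example, resyncing the event loop time in a callback that runs for a long time (a sketch; the pyev import name is an assumption):

    import time

    import pyev

    loop = pyev.Loop()

    def slow_callback(watcher, revents):
        time.sleep(3.0)    # stand-in for long-running work inside a callback
        loop.now(True)     # resync the event loop time before starting new relative timers

    timer = loop.timer(0.1, 0.0, slow_callback)
    timer.start()
    loop.start()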

suspend()
resume()

These two methods suspend and resume an event loop, for use when the loop is not used for a while and timeouts should not be processed. A typical use case would be an interactive program such as a game: when the user presses Control-z to suspend the game and resumes it an hour later, it would be best to handle timeouts as if no time had actually passed while the program was suspended. This can be achieved by calling suspend() in your SIGTSTP handler, sending yourself a SIGSTOP, and calling resume() directly afterwards to resume timer processing. Effectively, all Timer watchers will be delayed by the time spent between suspend() and resume(), and all Periodic watchers will be rescheduled (that is, they will lose any events that would have occurred while suspended).

After calling suspend() you must not call any method on the given loop other than resume(), and you must not call resume() without a previous call to suspend().

Note

Calling suspend()/resume() has the side effect of updating the event loop time (see now() with update set to True).
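
A sketch of the SIGTSTP handling described above (import name assumed as before):

    import os
    import signal

    import pyev

    loop = pyev.default_loop()

    def on_tstp(signum, frame):
        loop.suspend()                        # freeze the event loop time
        os.kill(os.getpid(), signal.SIGSTOP)  # actually stop; execution continues on SIGCONT
        loop.resume()                         # timers are now delayed by the time spent suspended

    signal.signal(signal.SIGTSTP, on_tstp)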

decref()
incref()

decref()/incref() can be used to remove or add a reference count on the event loop: every watcher keeps one reference, and as long as the reference count is nonzero, the loop will not return on its own. This is useful when you have a watcher that you never intend to unregister, but that nevertheless should not keep the loop from returning. In such a case, call decref() after starting the watcher, and incref() before stopping it. As an example, libev itself uses this for its internal signal pipe: it is not visible to the user and should not keep the loop from exiting if no event watchers registered by it are active. The same technique works well for generic recurring timers and for watchers registered from within third-party libraries. Just remember to decref() after start and incref() before stop (but only if the watcher wasn’t active before the start, or was active before the stop, respectively). Note also that libev might stop watchers itself (e.g. non-repeating timers), in which case you have to incref() in the callback.

Note

These methods are not related to Python reference counting.
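
For example, a recurring housekeeping timer that should not keep the loop alive on its own (a sketch under the same naming assumptions):

    import pyev

    loop = pyev.Loop()

    def housekeeping(watcher, revents):
        pass    # periodic maintenance work

    timer = loop.timer(60.0, 60.0, housekeeping)
    timer.start()
    loop.decref()    # drop the reference the active timer holds on the loop

    # ... later, before stopping the timer, restore the reference first:
    loop.incref()
    timer.stop()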

verify()

This method only does something when EV_VERIFY support has been compiled in (which is the default for non-minimal builds). It tries to go through all internal structures and checks them for validity. If anything is found to be inconsistent, it will print an error message to standard error and call abort(). This can be used to catch bugs inside libev itself: under normal circumstances, this method should never abort.

callback

The current invoke pending callback; its signature must be:

callback(loop)
Parameters:loop (Loop) – this loop.

This overrides the invoke pending functionality of the loop: instead of invoking all pending watchers when there are any, the loop will call this callback (use invoke() if you want to invoke all pending watchers). This is useful, for example, when you want to invoke the actual watchers inside another context (another thread, etc.).

If you want to reset the callback, set it to None.

Warning

Any unhandled exception happening during execution of this callback will stop the loop.
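
For example, a callback that instruments pending-watcher invocation while preserving the default behaviour (a sketch; the pyev import name is an assumption, pending and invoke() are documented here):

    import pyev

    loop = pyev.Loop()

    def on_pending(loop):
        print("invoking {0} pending watcher(s)".format(loop.pending))
        loop.invoke()    # without this, pending watchers would never run

    loop.callback = on_pending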

data

The Python object attached to the loop (see the data constructor parameter).

io_interval
timeout_interval

These two attributes influence the time that libev will spend waiting for events. Both time intervals are by default 0.0, meaning that libev will try to invoke Timer/Periodic and Io callbacks with minimum latency. Setting these to a higher value (the interval must be >= 0.0) allows libev to delay invocation of Io and Timer/Periodic callbacks to increase efficiency of loop iterations (or to increase power-saving opportunities). The idea is that sometimes your program runs just fast enough to handle one (or very few) event(s) per loop iteration. While this makes the program responsive, it also wastes a lot of CPU time to poll for new events, especially with backends like select(2) which have a high overhead for the actual polling but can deliver many events at once.

By setting a higher io_interval you allow libev to spend more time collecting Io events, so you can handle more events per iteration, at the cost of increasing latency. Timeouts (both Periodic and Timer) will not be affected. Setting this to a non-zero value will introduce an additional sleep() call into most loop iterations. The sleep time ensures that libev will not poll for Io events more often than once per this interval, on average (as long as the host time resolution is good enough). Many (busy) programs can usually benefit by setting the io_interval to a value near 0.1 or so, which is often enough for interactive servers (of course not for games), likewise for timeouts. It usually doesn’t make much sense to set it to a value lower than 0.01, as this approaches the timing granularity of most systems. Note that if you do transactions with the outside world and you can’t increase the parallelism, then this setting will limit your transaction rate (if you need to poll once per transaction and the io_interval is 0.01, then you can’t do more than 100 transactions per second).

Likewise, by setting a higher timeout_interval you allow libev to spend more time collecting timeouts, at the expense of increased latency/jitter/inexactness (the watcher callback will be called later). Io watchers will not be affected. Setting this to a non-zero value will not introduce any overhead in libev. Setting the timeout_interval can improve the opportunity for saving power, as the program will “bundle” timer callback invocations that are “near” in time together, by delaying some, thus reducing the number of times the process sleeps and wakes up again. Another useful technique to reduce iterations/wake-ups is to use Periodic watchers and make sure they fire on, say, one-second boundaries only.
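
For example (the values are purely illustrative):

    import pyev

    loop = pyev.Loop()
    loop.io_interval = 0.05       # batch Io events: poll at most ~20 times per second
    loop.timeout_interval = 0.1   # bundle timer callbacks that fire close together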

default

Read only

True if the loop is the default loop, False otherwise.

backend

Read only

One of the backends flags indicating the event backend in use.

pending

Read only

The number of pending watchers.

iteration

Read only

The current iteration count for the loop, which is identical to the number of times libev did poll for new events. It starts at 0 and happily wraps around with enough iterations. This value can sometimes be useful as a generation counter of sorts (it “ticks” the number of loop iterations), as it roughly corresponds to Prepare and Check calls, and is incremented between the prepare and check phases.

depth

Read only

The number of times start() was entered minus the number of times start() was exited normally, in other words, the recursion depth. Outside start(), this number is 0. In a callback, this number is 1, unless start() was invoked recursively (or from another thread), in which case it is higher.

The following methods are implemented as a convenience: they allow you to instantiate watchers directly attached to the loop (although they do not take keyword arguments). A usage sketch follows the list:

io(fd, events, callback[, data=None, priority=0])
Return type:Io
timer(after, repeat, callback[, data=None, priority=0])
Return type:Timer
periodic(offset, interval, callback[, data=None, priority=0])
Return type:Periodic
scheduler(reschedule, callback[, data=None, priority=0])
Return type:Scheduler
signal(signum, callback[, data=None, priority=0])
Return type:Signal
child(pid, trace, callback[, data=None, priority=0])
Return type:Child
idle(callback[, data=None, priority=0])
Return type:Idle
prepare(callback[, data=None, priority=0])
Return type:Prepare
check(callback[, data=None, priority=0])
Return type:Check
embed(other[, callback=None, data=None, priority=0])
Return type:Embed
fork(callback[, data=None, priority=0])
Return type:Fork
async(callback[, data=None, priority=0])
Return type:Async
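
A usage sketch for these shortcuts; the pyev import name, the module-level EV_READ constant, and the (watcher, revents) callback signature are assumptions:

    import sys

    import pyev

    loop = pyev.Loop()

    def on_stdin(watcher, revents):
        print("read: {0!r}".format(sys.stdin.readline()))

    def on_tick(watcher, revents):
        print("tick")

    io = loop.io(sys.stdin.fileno(), pyev.EV_READ, on_stdin)
    timer = loop.timer(0.0, 1.0, on_tick)
    io.start()
    timer.start()
    loop.start()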

Loop flags

behaviour

EVFLAG_AUTO

The default flags value.

EVFLAG_NOENV

If this flag bit is or’ed into the flags value (or the program runs setuid() or setgid()) then libev will not look at the environment variable LIBEV_FLAGS. Otherwise (the default), LIBEV_FLAGS will override the flags completely if it is found in the environment. This is useful to try out specific backends to test their performance, or to work around bugs.

EVFLAG_FORKCHECK

Instead of calling reset() manually after a fork, you can also make libev check for a fork in each iteration by enabling this flag. This works by calling getpid() on every iteration of the loop, and thus this might slow down your event loop if you do a lot of loop iterations and little real work, but is usually not noticeable. The big advantage of this flag is that you can forget about fork (and forget about forgetting to tell libev about forking, although you still have to ignore SIGPIPE) when you use it. This flag setting cannot be overridden or specified in the LIBEV_FLAGS environment variable.

EVFLAG_SIGNALFD

When this flag is specified, then libev will attempt to use the signalfd(2) API for its Signal (and Child) watchers. This API delivers signals synchronously, which makes it faster and might make it possible to get the queued signal data. It can also simplify signal handling with threads, as long as you properly block signals in the threads that are not interested in handling them. signalfd(2) will not be used by default as this changes your signal mask.

EVFLAG_NOSIGMASK

When this flag is specified, then libev will avoid modifying the signal mask. Specifically, this means you have to make sure signals are unblocked when you want to receive them. This behaviour is useful when you want to do your own signal handling, or want to handle signals only in specific threads and want to avoid libev unblocking the signals. It’s also required by POSIX in a threaded program, as libev calls sigprocmask(), whose behaviour is officially unspecified.

EVFLAG_NOTIMERFD

When this flag is specified, then libev will avoid using a timerfd to detect time jumps. It will still be able to detect time jumps, but takes longer and has a lower accuracy in doing so, but saves a file descriptor per loop. The current implementation only tries to use a timerfd when the first Periodic watcher is started and falls back on other methods if it cannot be created, but this behaviour might change in the future.
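
For example, combining behaviour flags (and, optionally, backend bits) at construction time (import name assumed as before):

    import pyev

    # ignore LIBEV_FLAGS from the environment and request epoll explicitly
    loop = pyev.Loop(pyev.EVFLAG_NOENV | pyev.EVBACKEND_EPOLL)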

backends

EVBACKEND_SELECT

Availability: POSIX

The standard select(2) backend. Not completely standard, as libev tries to roll its own fd_set with no limits on the number of fds, but if that fails, expect a fairly low limit on the number of fds when using this backend. It doesn’t scale too well (O(highest_fd)), but is usually the fastest backend for a low number of (low-numbered) fds.

To get good performance out of this backend you need a high amount of parallelism (most of the file descriptors should be busy). If you are writing a server, you should accept() in a loop to accept as many connections as possible during one iteration. You might also want to have a look at io_interval to increase the amount of readiness notifications you get per iteration.

This backend maps EV_READ to the readfds set and EV_WRITE to the writefds set.

EVBACKEND_POLL

Availability: POSIX

The poll(2) backend. It’s more complicated than select(2), but handles sparse fds better and has no artificial limit on the number of fds you can use (except it will slow down considerably with a lot of inactive fds). It scales similarly to select(2), i.e. O(total_fds).

See EVBACKEND_SELECT for performance tips.

This backend maps EV_READ to POLLIN | POLLERR | POLLHUP, and EV_WRITE to POLLOUT | POLLERR | POLLHUP.

EVBACKEND_EPOLL

Availability: Linux

Use the Linux-specific epoll(7) interface. For few fds, this backend is a little bit slower than poll(2) and select(2), but it scales phenomenally better. While poll(2) and select(2) usually scale like O(total_fds) where total_fds is the total number of fds (or the highest fd), epoll(7) scales either O(1) or O(active_fds).

While stopping, setting and starting an Io watcher in the same iteration will result in some caching, there is still a system call per such incident, so it’s best to avoid that. Also, dup()’ed file descriptors might not work very well if you register events for both file descriptors. Best performance from this backend is achieved by not unregistering all watchers for a file descriptor until it has been closed, if possible, i.e. keep at least one watcher active per fd at all times. Stopping and starting a watcher (without re-setting it) also usually doesn’t cause extra overhead.

A fork can both result in spurious notifications as well as in libev having to destroy and recreate the epoll object (in both the parent and child processes), which can take considerable time (one syscall per file descriptor), is hard to detect, and thus should be avoided. All this means that, in practice, select(2) can be as fast or faster than epoll(7) for maybe up to a hundred file descriptors, depending on usage.

While nominally embeddable in other event loops, this feature is broken in all kernel versions tested so far.

This backend maps EV_READ and EV_WRITE the same way EVBACKEND_POLL does.

EVBACKEND_LINUXAIO

Availability: Linux

Use the Linux-specific Linux AIO (not aio(7) but io_submit(2)) event interface available in post-4.18 kernels (but libev only tries to use it in 4.19+).

This backend maps EV_READ and EV_WRITE the same way EVBACKEND_POLL does.

EVBACKEND_KQUEUE

Availability: most BSD clones

Due to a number of bugs and inconsistencies between BSD implementations, kqueue is not being “auto-detected” unless you explicitly specify it in the flags or libev was compiled on a known-to-be-good (-enough) system like NetBSD. It scales the same way the epoll(7) backend does.

While stopping, setting and starting an Io watcher never causes an extra system call as with EVBACKEND_EPOLL, it still adds up to two event changes per incident. Support for fork() is bad (you might have to leak fds on fork) and it drops fds silently in similarly hard-to-detect cases. This backend usually performs well under most conditions.

You can still embed kqueue into a normal poll(2) or select(2) backend and use it only for sockets (after having made sure that sockets work with kqueue on the target platform). See Embed watchers for more info.

This backend maps EV_READ into an EVFILT_READ kevent with NOTE_EOF, and EV_WRITE into an EVFILT_WRITE kevent with NOTE_EOF.

EVBACKEND_DEVPOLL

Availability: Solaris 8

This is not implemented yet (and might never be). According to reports, /dev/poll only supports sockets and is not embeddable, which would limit the usefulness of this backend immensely.

EVBACKEND_PORT

Availability: Solaris 10

This uses the Solaris 10 event port mechanism. It’s slow, but it scales very well (O(active_fds)). While this backend scales well, it requires one system call per active file descriptor per loop iteration. For small and medium numbers of file descriptors a “slow” EVBACKEND_SELECT or EVBACKEND_POLL backend might perform better.

On the positive side, this backend actually performed fully to specification in all tests and is fully embeddable.

This backend maps EV_READ and EV_WRITE the same way EVBACKEND_POLL does.

EVBACKEND_ALL

Try all backends (even potentially broken ones that wouldn’t be tried with EVFLAG_AUTO). Since this is a mask, you can do stuff such as:

EVBACKEND_ALL & ~EVBACKEND_KQUEUE

It is definitely not recommended to use this flag, use whatever recommended_backends() returns, or simply do not specify a backend at all.

EVBACKEND_MASK

Not a backend at all, but a mask to select all backend bits from a flags value, in case you want to mask out any backends from flags (e.g. when modifying the LIBEV_FLAGS environment variable).
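
For example, keeping the behaviour bits of a flags value while dropping any backend selection (import name assumed as before):

    import pyev

    flags = pyev.EVFLAG_NOENV | pyev.EVBACKEND_KQUEUE
    behaviour_only = flags & ~pyev.EVBACKEND_MASK  # EVFLAG_NOENV survives, the backend bit is gone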

start() flags

If flags is omitted or specified as 0, it will keep handling events until either no event watchers are active anymore or stop() is called.

EVRUN_NOWAIT

A flags value of EVRUN_NOWAIT will look for new events, will handle those events and any already outstanding ones, but will not wait and block your process in case there are no events and will return after one iteration of the loop. This is sometimes useful to poll and handle new events while doing lengthy calculations, to keep the program responsive.

EVRUN_ONCE

A flags value of EVRUN_ONCE will look for new events (waiting if necessary) and will handle those and any already outstanding ones. It will block your process until at least one new event arrives (which could be an event internal to libev itself, so there is no guarantee that a user-registered callback will be called), and will return after one iteration of the loop. This is useful if you are waiting for some external event in conjunction with something not expressible using other libev watchers. However, a pair of Prepare/Check watchers is usually a better approach for this kind of thing.
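
A sketch of the polling pattern EVRUN_NOWAIT enables (the pyev import name and the do_one_chunk_of_work() helper are hypothetical):

    import pyev

    loop = pyev.Loop()

    for chunk in range(1000):            # stand-in for a lengthy calculation
        do_one_chunk_of_work(chunk)      # hypothetical helper doing the real work
        loop.start(pyev.EVRUN_NOWAIT)    # handle pending events without ever blocking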

stop() how

EVBREAK_ONE

If how is omitted or specified as EVBREAK_ONE, it will make the innermost start() call return.

EVBREAK_ALL

A how value of EVBREAK_ALL will make all nested start() calls return.
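
For example, unwinding every nested start() call from a Signal watcher (a sketch; names assumed as before):

    import signal

    import pyev

    loop = pyev.default_loop()

    def on_sigint(watcher, revents):
        loop.stop(pyev.EVBREAK_ALL)    # return from all nested start() calls

    sig = loop.signal(signal.SIGINT, on_sigint)
    sig.start()
    loop.start()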