TASK(9) Kernel Developer's Manual TASK(9)

NAME

task - asynchronous actions scheduled from higher to lower priority

SYNOPSIS

#include <sys/task.h>

void
task_init(struct task *task, void (*fn)(struct task *));

void
task_destroy(struct task *task);

void
task_schedule(struct task *task);

void
task_done(struct task *task);

bool
task_cancel(struct task *task, kmutex_t *interlock);

bool
task_cancel_async(struct task *task);

void
task_drain(void);

DESCRIPTION

The task abstraction is provided to schedule actions to be run asynchronously. Tasks are usually scheduled from higher-priority contexts such as hardware interrupt handlers to run actions in lower-priority contexts such as threads.

Tasks are cheap: each task requires four words of memory, and the only interprocessor synchronization required to schedule a task is typically a single per-task atomic compare-and-swap and a single per-CPU mutex acquire/release, which is unlikely to induce contention.

Initializing and scheduling a task is much cheaper than creating a thread, which requires a large struct lwp record, a stack of its own, and bookkeeping in various associated data structures. The task abstraction maintains per-CPU pools of threads to run tasks in thread context without frequently creating threads or maintaining idle threads.

Initializing and scheduling a task is more expensive than scheduling a softint(9), which does not itself use interprocessor synchronization, but tasks are easier to use than softints for many purposes.

Task state is stored in the struct task structure. Callers of the task abstraction must allocate memory for struct task objects, but should treat them as opaque: do not inspect or copy them. Each task runs at most once at any given time: once its action has begun to run, the task will not run again until the action returns or calls task_done() to signal that it may be scheduled again.

Tasks scheduled with task_schedule() run in threads at priority PRI_NONE one at a time per CPU. Since they run one at a time, they are not allowed to sleep except on synchronization objects such as mutex(9) or rwlock(9). To run tasks at different priorities, or to allow long sleeps including memory allocation, you must use taskqueue(9) to create your own task queue.
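The typical lifecycle is sketched below. This is an illustrative outline only, not a complete driver; the mydev names are hypothetical, and error handling is elided.

	struct mydev_softc {
		struct task	sc_task;
		...
	};

	static void
	mydev_task_action(struct task *task)
	{
		struct mydev_softc *sc = container_of(task,
		    struct mydev_softc, sc_task);

		/* Runs in thread context at PRI_NONE; may take mutexes.  */
		...
	}

	/* At attach: */
	task_init(&sc->sc_task, &mydev_task_action);

	/* From an interrupt handler: */
	task_schedule(&sc->sc_task);

	/* At detach, once the task can no longer be scheduled: */
	task_destroy(&sc->sc_task);

Embedding the struct task in the softc, as here, lets the action recover its context with container_of() without any extra allocation.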

FUNCTIONS

task_init(task, fn)
Initialize the task structure task, whose memory must be allocated by the caller, with the action fn. To change the action of a task, you must use task_destroy() first and then call task_init() again.

task_init() may be used in any context, including hard interrupt context.

task_destroy(task)
Destroy task, which may not be used again unless reinitialized with task_init(). task must not be scheduled to run. If it may still be scheduled, use task_cancel() to cancel it first; task_cancel_async() is not sufficient, because the task may still be running when it returns.

task_destroy() may be used in any context, including hard interrupt context.

task_schedule(task)
Schedule task to run as soon as possible. If task is already scheduled to run, this has no effect. If task has already begun to run but has not yet completed, this schedules it to run again as soon as possible after it completes.

task_schedule() may be used in any context, including hard interrupt context, except at interrupt priority levels above IPL_VM.

task_done(task)
Mark task as done so that it can be scheduled again. After this, you may destroy task and free the memory containing it. If you do not call task_done(), the thread running task will continue to use it after the action returns.

task_done() may be called only by a task's action.

task_cancel(task, interlock)
Try to cancel task. If it is scheduled and successfully cancelled, return true. If it is not scheduled, or if it has already begun to run, return false. If task has already begun to run, task_cancel() will release interlock, wait for the task's action either to return or to call task_done(), and then re-acquire interlock. May sleep.

The interlock is provided so that if the task's action needs it and the caller of task_cancel() holds it, then task_cancel() can release the interlock after acquiring locks internal to the task abstraction in order to avoid racing or deadlocking with the task's action. You should always assume that interlock will be released and re-acquired, and recheck any invariants you rely on it to preserve.
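For example, a caller that holds the same lock the action uses might cancel and then revalidate its state, since task_cancel() may drop and re-acquire the interlock while waiting. This is a sketch only; sc_lock and sc_task are hypothetical names.

	mutex_enter(&sc->sc_lock);
	if (!task_cancel(&sc->sc_task, &sc->sc_lock)) {
		/*
		 * The action ran, or is running: sc_lock may have
		 * been released and re-acquired while we waited, so
		 * recheck any state it protects before relying on it.
		 */
		...
	}
	mutex_exit(&sc->sc_lock);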

task_cancel_async(task)
Try to cancel task like task_cancel(), but if it has already begun to run, then return immediately instead of waiting for it to complete.

task_cancel_async() may be used in any context, including hard interrupt context, except at interrupt priority levels above IPL_VM.

task_drain()
Wait for all non-delayed tasks that have been scheduled so far to return. Calling task_done() in the action is not enough: every action must return before task_drain() returns. This is an expensive global synchronization operation. May sleep.

Note: task_drain() does not wait for any delayed_task(9) tasks to complete.

EXAMPLES

The following code illustrates a typical device driver's interaction with tasks: setting them up on attach, draining and destroying them on detach, and scheduling them from interrupt handlers. It also includes a contrived example of task_cancel(), and of calling task_done() and destroying a task in its own action.

struct mydev_softc { 
	... 
	kmutex_t	sc_lock; 
	struct task	sc_task; 
	struct task	*sc_contrivedtask; 
	... 
}; 
 
static void 
mydev_attach(device_t parent, device_t self, void *aux) 
{ 
	struct mydev_softc *sc = device_private(self); 
	... 
	task_init(&sc->sc_task, &mydev_action); 
	sc->sc_contrivedtask = NULL; 
	... 
} 
 
static void 
mydev_detach(device_t self, int flags) 
{ 
	struct mydev_softc *sc = device_private(self); 
	... 
	task_drain(); 
	KASSERT(sc->sc_contrivedtask == NULL); 
	task_destroy(&sc->sc_task); 
	... 
} 
 
static void 
mydev_intr(void *arg) 
{ 
	struct mydev_softc *sc = arg; 
	... 
	/* If the hardware says there's stuff to do, schedule our task.  */ 
	mutex_enter(&sc->sc_lock); 
	if (ISSET(intrmask, MYDEV_A_NEW_DEVELOPMENT)) { 
		sc->sc_dostuff |= __SHIFTOUT(intrmask, MYDEV_STUFF); 
		task_schedule(&sc->sc_task); 
	} 
	mutex_exit(&sc->sc_lock); 
	... 
} 
 
static void 
mydev_action(struct task *task) 
{ 
	struct mydev_softc *sc = container_of(task, struct mydev_softc, 
	    sc_task); 
	uint32_t stuff = 0; 
	unsigned i; 
 
	/* Grab the stuff to do and acknowledge we're doing it.  */ 
	mutex_enter(&sc->sc_lock); 
	if (sc->sc_dostuff) { 
		stuff = sc->sc_dostuff; 
		sc->sc_dostuff = 0; 
	} 
	mutex_exit(&sc->sc_lock); 
 
	/* Do it.  */ 
	for (i = 0; i < MYDEV_NSTUFF; i++) 
		if (ISSET(stuff, (1U << i))) 
			... 
} 
 
static void 
mydev_doit(struct mydev_softc *sc) 
{ 
	... 
	/* Set up a task, if we need one.  */ 
	struct task *tmp = kmem_alloc(sizeof(*tmp), KM_SLEEP); 
	mutex_enter(&sc->sc_lock); 
	if (sc->sc_contrivedtask == NULL) { 
		sc->sc_contrivedtask = tmp; 
		tmp = NULL; 
		task_init(sc->sc_contrivedtask, &mydev_contrivedaction); 
		task_schedule(sc->sc_contrivedtask); 
	} 
	mutex_exit(&sc->sc_lock); 
	if (tmp != NULL) 
		kmem_free(tmp, sizeof(*tmp)); 
	... 
} 
 
static void 
mydev_nevermind(struct mydev_softc *sc) 
{ 
	struct task *task = NULL; 
 
	/* Cancel the task, if there is one.  */ 
	mutex_enter(&sc->sc_lock); 
	if (sc->sc_contrivedtask != NULL) { 
		if (task_cancel(sc->sc_contrivedtask, &sc->sc_lock)) { 
			/* We cancelled it, so we have to clean it up.  */ 
			task = sc->sc_contrivedtask; 
			sc->sc_contrivedtask = NULL; 
		} 
	} 
	mutex_exit(&sc->sc_lock); 
	if (task != NULL) { 
		task_destroy(task); 
		kmem_free(task, sizeof(*task)); 
	} 
} 
 
static void 
mydev_contrivedaction(struct task *task) 
{ 
	struct mydev_softc *sc = ...;	/* recovered from task somehow */ 
 
	... 
	mutex_enter(&sc->sc_lock); 
	KASSERT(sc->sc_contrivedtask == task); 
	sc->sc_contrivedtask = NULL; 
	mutex_exit(&sc->sc_lock); 
	task_done(task); 
	task_destroy(task); 
	kmem_free(task, sizeof(*task)); 
}

CODE REFERENCES

The task abstraction is implemented in sys/kern/kern_task.c.

SEE ALSO

callout(9), delayed_task(9), softint(9), taskqueue(9), workqueue(9)
March 27, 2014 NetBSD 6.1_RC3