KmtestsHowto

Building and Running Tests

Quick Start

  • Run the configure script with -DENABLE_ROSTESTS=1
  • Fire up your favorite build environment and build the kmtest_all target. One of:
 ninja kmtest_all
 make kmtest_all
 nmake /nologo kmtest_all
  • Put the resulting files (kmtest_.exe, kmtest_drv.sys, example_drv.sys and any other *_drv.sys files) into a single folder accessible from your ReactOS or Windows installation (a ReactOS Boot-CD places them in ReactOS\bin) and run kmtest_.exe from a command prompt in that folder to see the available tests and options. Running a test might look as follows:
 kmtest_ Example

Gotchas

  • Do not run kmtests from a network drive. Windows will not accept this as a location for a driver.
  • When replacing kmtest_drv.sys on a live system, make sure to unload it first. Otherwise you might run the version already loaded into memory rather than your replacement.
 kmtest_ stop
  • If you want to start kmtest from a new location, make sure you delete the driver service first. The location of kmtest_drv.sys is saved in the registry and will not be updated until the service is deleted.
 kmtest_ delete

Build targets

The kmtest_all target is a meta-target that builds all files relevant to running kmtests. Running kmtest_all_clean has no effect.
The individual targets are:

  • kmtest - builds kmtest_.exe, the user-mode front-end application
  • kmtest_drv - builds kmtest_drv.sys, the main driver containing the tests
  • <special-purpose-driver>_drv - builds special-purpose drivers required for some tests (e.g. example_drv.sys, iohelper_drv.sys, iodeviceobject_drv.sys). See the bottom of CMakeLists.txt for the full list.
  • kmtest_drivers - builds kmtest_drv.sys and all special-purpose drivers

Command line options

  • --list - show the list of tests
  • --list-all - show the list of tests, including hidden tests that are excluded from testbot runs because they may crash, require user interaction, or are otherwise unsuitable for automation
  • create - create the kmtest service. This is usually unnecessary as it is created automatically when running or listing tests
  • start - load the kmtest driver. This is usually unnecessary as it is started automatically when running or listing tests
  • stop - stop the kmtest driver. This is not done automatically (for performance reasons), but is required if you want to replace kmtest_drv.sys with an updated version (see the example sequence after this list)
  • delete - delete the kmtest service. This is not done automatically, but is required if you want to change the path that kmtests are run from. Unless this command is executed (or the service otherwise deleted), the location where you first started kmtest_.exe will be remembered.
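
For example, a typical sequence for replacing kmtest_drv.sys on a running system might look like this (the source path is a placeholder for wherever your newly built driver is):

 kmtest_ stop
 copy \path\to\new\kmtest_drv.sys .
 kmtest_ Example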

Using RosAutoTest

  • rosautotest.exe can be built/run like kmtest_.exe.
  • To run the whole list of non-hidden tests, use
rosautotest kmtest
  • rosautotest will search for kmtest_.exe in a specific directory. One of:
%ROSAUTOTEST_DIR% if set
%SystemRoot%\bin if not
  • Beware: if there are multiple kmtest_*.exe files in that directory, rosautotest will execute them all.

Test structure

Test naming and categories

Tests must be sorted into categories according to the module they test (usually ntoskrnl or hal) and, where applicable, which part of that module. Each category has its own folder in which the test sources reside, and a prefix that is prepended to each test name.

The following categories already exist:

Example tests

  • Folder: example, Prefix: none

Functions from ntoskrnl, hal and corresponding macros or inline functions

  • Folder: ntos_ex, Prefix: Ex – Executive library functions
  • Folder: ntos_fsrtl, Prefix: FsRtl – File-System Runtime Library routines
  • Folder: ntos_io, Prefix: Io – I/O-Manager support functions
  • Folder: ntos_ke, Prefix: Ke – Core kernel support routines
  • Folder: ntos_mm, Prefix: Mm – Memory manager support functions
  • Folder: ntos_ob, Prefix: Ob – Object manager functions

Runtime Library functions and corresponding macros or inline functions

  • Folder: rtl, Prefix: Rtl

Simple kernel-mode tests

Simple tests are part of the main kmtest driver. They should be placed in a single source file, named after the test, in the appropriate category sub-folder. An example is provided by the Example test.

The steps required for such a test are:

  • add the source file to kmtests\CMakeLists.txt to be included in KMTEST_DRV_SOURCE
  • add the test to kmtests\kmtest_drv\testlist.c in alphabetic (ASCII) order (see the registration sketch below)
  • in the source file (testname.c), include <kmt_test.h>, then wrap your test in the START_TEST macro
 START_TEST(Minimal)
 {
     /* test goes here */
 }
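
The registration in kmtests\kmtest_drv\testlist.c then looks roughly like the sketch below. The KMT_TESTFUNC and KMT_TEST names are taken from the existing entries in that file; copy the exact form from those entries rather than from this approximation:

 /* kmtests\kmtest_drv\testlist.c (excerpt, approximate) */
 KMT_TESTFUNC Test_Example;
 KMT_TESTFUNC Test_Minimal;

 const KMT_TEST TestList[] =
 {
     { "Example", Test_Example },
     { "Minimal", Test_Minimal },
     { NULL,      NULL },
 };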

Test parts in separate drivers

For tests which are unsuitable for the main kmtest driver (such as tests directly involving the driver object or dispatch routines), special-purpose drivers can be used. kmtest_drv\kmtest_standalone.c provides a framework for such drivers. The test must provide TestEntry and TestUnload functions and can register handlers for various events with the framework. See the Example_drv driver for an example.

The steps required for such a test are:

  • create a CMakeLists.txt file for the driver
  • add the driver to the kmtest_drivers target in the kmtests root CMakeLists.txt file.
  • add the directory containing the driver to kmtests\CMakeLists.txt
  • in the source file (testname_drv.c), include <kmt_test.h>, then provide at least the TestEntry and TestUnload functions (a sketch follows this list)
  • create a user-mode part that calls the driver
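
A minimal driver skeleton might look like the sketch below. The TestEntry/TestUnload parameter lists here are an approximation; copy the exact signatures from kmtest_drv\kmtest_standalone.c and the Example_drv sources:

 #include <kmt_test.h>

 /* approximate signatures -- take the exact ones from kmtest_standalone.c */
 NTSTATUS
 TestEntry(
     IN PDRIVER_OBJECT DriverObject,
     IN PCUNICODE_STRING RegistryPath,
     OUT PCWSTR *DeviceName,
     IN OUT INT *Flags)
 {
     *DeviceName = L"Example";
     /* register IRP handlers or other event callbacks with the framework here */
     return STATUS_SUCCESS;
 }

 VOID
 TestUnload(
     IN PDRIVER_OBJECT DriverObject)
 {
     /* undo anything set up in TestEntry */
 }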

User-mode test parts

Some tests require interaction with a user-mode application (simple examples would be CreateFile or DeviceIoControl). This can be achieved using user-mode test parts, which are included in the kmtest_.exe application.

If a user-mode test part exists, it is always called instead of the kernel-mode test with the same name. The user-mode part is then responsible for calling KmtRunKernelTest as required to run the kernel-mode part.

User-mode tests are also responsible for loading and starting any tests in separate drivers. The Example_user test demonstrates the functions available for that purpose (a sketch follows the steps below).

The steps required for such a test are:

  • add the source file to kmtests\CMakeLists.txt to be included in KMTEST_SOURCE
  • add the test to kmtests\kmtest\testlist.c in alphabetic (ASCII) order
  • in the source file (testname_user.c), include <kmt_test.h>, then wrap your test in the START_TEST macro as with kernel-mode tests.
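
As an illustration, a user-mode part that loads a separate test driver and then runs the kernel-mode part might look like the sketch below. The helper names (KmtLoadDriver, KmtOpenDriver, KmtCloseDriver, KmtUnloadDriver) are those used by Example_user; check that test for the exact calls and parameters:

 #include <kmt_test.h>

 START_TEST(Example)
 {
     /* load and open the special-purpose driver (parameters approximate) */
     KmtLoadDriver(L"Example", FALSE);
     KmtOpenDriver();

     /* run the kernel-mode part of the test with the same name */
     KmtRunKernelTest("Example");

     KmtCloseDriver();
     KmtUnloadDriver();
 }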

RTL-type tests

Since Rtl functions can be used from both user and kernel mode, tests for these (or similar functions) can run in either mode. The special preprocessor symbol KMT_EMULATE_KERNEL should be defined in such tests to emulate any functions that are unavailable in user mode (such as IRQL functions). The RtlMemory test is an example of such a test.

The steps required for this type of test are:

  • add the source file to kmtests\CMakeLists.txt to be included in COMMON_SOURCE.
  • add the test to kmtests\kmtest\testlist.c with the normal test name (alphabetic (ASCII) order)
  • add the test to kmtests\kmtest_drv\testlist.c (alphabetic (ASCII) order) with KM appended to the test name string to indicate the kernel-mode version
  • in the source file (testname.c), define KMT_EMULATE_KERNEL, then include <kmt_test.h>, and wrap your test in the START_TEST macro as usual (see the sketch below).
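
A skeleton for such a shared test might look like this (RtlSomething is a placeholder name):

 /* testname.c - compiled into both kmtest_.exe and kmtest_drv.sys */
 #define KMT_EMULATE_KERNEL
 #include <kmt_test.h>

 START_TEST(RtlSomething)
 {
     /* Rtl* calls and ok_* checks go here; KMT_EMULATE_KERNEL provides
        user-mode substitutes for kernel-only helpers such as IRQL functions */
 }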

Hidden tests

Prefixing a test name with a minus (-) in testlist.c makes it "hidden", meaning a simple call to kmtest or kmtest --list will not show it. This is useful for tests that might crash or show other unexpected behavior (such as requiring user interaction) and are thus unsuitable for automated testing runs. A test with both user- and kernel-mode components must be hidden in both test lists or in neither. For the test list sorting order, ignore the leading minus.
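
For example, a hidden entry in testlist.c might look like this (hypothetical test name; the entry format is the same as for normal tests):

 { "-SomethingInteractive", Test_SomethingInteractive },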

Test syntax

Testing a condition

The basic primitive for writing tests is a condition check using the ok macro. The macro takes a condition that must be true for the test to succeed (similar to assert), and a message to be displayed on failure.

Size = 2UL * 1024 * 1024 * 1024; /* 2 GB */
Pointer = ExAllocatePoolWithTag(NonPagedPool, Size, 'xxxx');
ok(Pointer == NULL, "Allocating %lu bytes of non-paged pool succeeded unexpectedly! Pointer = %p\n", Size, Pointer);


ok will automatically increase the count of tests performed, and also count failures. The source file name and line number of the ok invocation are added to the failure message, so the message need not repeat information already included in that line. It should, however, include any information against which the test failed (such as a return or variable value) and any non-obvious values such as loop counters.

It is preferred that no tests ever fail on Windows.

Convenience macros

Several ok_* convenience macros for the most common conditions are also available.

ok_irql(PASSIVE_LEVEL);
ok_bool_false(KeAreAllApcsDisabled(), "KeAreAllApcsDisabled returned");
ok_eq_pointer(Something->ListEntry.Flink, &Something->ListEntry);
ok_eq_ulong(KeGetCurrentProcessorNumber(), 0);
ok_eq_int(Irp->RequestorMode, UserMode);
ok_eq_hex(Status, STATUS_ACCESS_VIOLATION);
ok_eq_str(AnsiName, "Ansi Name");
ok_eq_wstr(UnicodeName, L"Unicode Name");

Warning: Most of these macros will evaluate the expression passed to them twice! Do not call functions that have side-effects inside these macros!

ok_eq_long(InterlockedIncrement(&Variable), 5);

Instead, add a variable for the return value:

Ret = InterlockedIncrement(&Variable);
ok_eq_long(Ret, 5);

The ok_bool_true and ok_bool_false macros are notable exceptions that are safe to use.

Adding informational output

For adding any additional information that may be useful (for instance when debugging why a test fails), the trace macro can be used.

trace("Registry Path: %wZ\n", RegistryPath);

Note that it is preferred to do condition checks whenever possible, as these will provide results which can be automatically parsed.

Skipping tests

If some condition prevents specific tests from running, these tests can be skipped using the skip() macro. This is preferred over executing tests that might otherwise crash.

Pointer = ExAllocatePoolWithTag(PagedPool, PAGE_SIZE, 'xxxx');
ok(Pointer != NULL, "Out of memory\n");

if (!skip(Pointer != NULL, "Allocation failed\n"))
{
    /* do stuff that uses the memory */
    ExFreePoolWithTag(Pointer, 'xxxx');
}

skip() should be passed a condition which must be TRUE in order for the following test(s) to succeed. It will then return whether to skip the test(s).

An approach in line with the Winetest version can also be taken to prevent nesting:

Pointer = ExAllocatePoolWithTag(PagedPool, PAGE_SIZE, 'xxxx');

if (skip(Pointer != NULL, "Allocation failed\n"))
    goto done;

/* do stuff that uses the memory */

done:
if (Pointer)
    ExFreePoolWithTag(Pointer, 'xxxx');

Handling exceptions

Under some circumstances, especially when checking how a function handles invalid parameters, it can be useful to catch exceptions such as access violations. This is rarely the correct way to handle errors in drivers and applications, and cannot prevent more serious memory corruption issues. However, it can be useful in a test to easily demonstrate whether or not a pointer is dereferenced under certain circumstances.

The KmtStartSeh and KmtEndSeh macros are provided to save some of the typing associated with a structured exception handler. Simply specify the expected exception status (e.g. STATUS_SUCCESS for no exception, or STATUS_ACCESS_VIOLATION, STATUS_DATATYPE_MISALIGNMENT, etc.).

KmtStartSeh()
    (void)FunctionWithoutANullCheck(NULL);
KmtEndSeh(STATUS_ACCESS_VIOLATION);
KmtStartSeh()
    Status = FunctionWithANullCheck(NULL);
KmtEndSeh(STATUS_SUCCESS);
ok_eq_hex(Status, STATUS_INVALID_PARAMETER);

Checking for OS build type

As many tests go near the border of what is "acceptable" behavior when calling API functions, the behavior differs between debug and release builds of the operating system. A checked build will have much stricter checks, and issue bugchecks for behavior that a free build will accept without any problem. To avoid system crashes and still allow these tests to be run on release builds, the offending pieces of code can be wrapped in an if statement checking the variable KmtIsCheckedBuild. This variable is available to all kernel-mode tests, and will be 1 for a checked (debug) build and 0 for a free (release) build at runtime.

The behavior of some synchronization techniques also varies greatly between build types. Synchronization is usually much simpler in a uniprocessor kernel compared to a multiprocessor kernel. The variable KmtIsMultiProcessorBuild can be used to check the kernel build type at runtime.

The KernelType test demonstrates the use of these variables.
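
A sketch of such a guard, with the call under test left as a placeholder:

if (!KmtIsCheckedBuild)
{
    /* a checked build would bugcheck on this call, so only run it on a free build */
    /* ... call under test with the borderline parameters ... */
}

if (KmtIsMultiProcessorBuild)
{
    /* expectations that only hold for the multiprocessor kernel */
}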

Guarded Memory Allocations

To detect buffer overruns in the functions to be tested, guarded memory allocations can be used. A memory area allocated using KmtAllocateGuarded has a non-accessible page directly behind it, so that any buffer overrun will result in an access violation. Such an area can be freed using KmtFreeGuarded.

The GuardedMemory test is an example (and a test) for using guarded allocations.
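
A short usage sketch, assuming KmtAllocateGuarded/KmtFreeGuarded form a simple allocate/free pair as described above (the size and fill pattern are arbitrary):

Buffer = KmtAllocateGuarded(123);

if (!skip(Buffer != NULL, "Guarded allocation failed\n"))
{
    /* writing within the 123 bytes is fine; writing past them hits the
       guard page and raises an access violation */
    KmtStartSeh()
        RtlFillMemory(Buffer, 123, 0x55);
    KmtEndSeh(STATUS_SUCCESS);

    KmtFreeGuarded(Buffer);
}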