
The Non Maskable Interrupt


I would like to take some time to discuss a wonderful coding tool provided by a number of modern chip makers. I am (of course) talking about the Non Maskable Interrupt (NMI).
In general, there are two System Registers used to manage system interrupts – the Interrupt Mask Register and the Interrupt Cause Register. The Interrupt Mask Register allows privileged software to enable or disable specific interrupts; it contains a bit for each interrupt type. The Interrupt Cause Register indicates which interrupts are ready for service; it, too, contains a bit for each interrupt type.
So when an interrupt comes into the system, the appropriate bit is set in the Interrupt Cause Register. If the corresponding bit is also set in the Interrupt Mask Register, the interrupt is raised and serviced by the appropriate Exception Handler. Otherwise, the system is not interrupted and the interrupt is effectively ignored.
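The mask-and-cause check above can be sketched in a few lines of C. The register layout and names here are illustrative assumptions, not any particular chip's specification:

```c
#include <stdint.h>

/* Hypothetical 32-bit interrupt registers: one bit per interrupt type. */
typedef struct {
    uint32_t cause;  /* set by hardware when an interrupt arrives        */
    uint32_t mask;   /* set by software: 1 = interrupt enabled           */
} irq_regs;

/* Returns nonzero if interrupt `irq` is both pending (cause bit set)
 * and enabled (mask bit set), i.e. it should be dispatched to its
 * exception handler; otherwise the interrupt is effectively ignored. */
int irq_should_service(const irq_regs *r, unsigned irq)
{
    uint32_t bit = 1u << irq;
    return (r->cause & bit) && (r->mask & bit) ? 1 : 0;
}
```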
Non Maskable Interrupt Defined
By definition, the Non Maskable Interrupt is an interrupt that cannot be disabled. The NMI is generated by the CPU at regular intervals. This functionality provides two benefits:
Detecting a Hung System
The first benefit is detecting a hung or wedged system – a system that no longer appears to be doing anything, which is often the result of a very tight coding loop. In this case, a counter is added to the system scheduler and incremented every time the scheduler completes its main processing loop. If this counter stops advancing for a long period of time, then the system is stuck in some specific task and normal system processing has been suspended. The NMI Exception Handler can easily detect this and generate a panic. Additional counters and flags can then be added throughout the system to ensure that system processing is proceeding in a healthy fashion; the NMI Exception Handler periodically checks these counters and flags and thus detects when system processing is in an abnormal state.
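The heartbeat check can be sketched as follows. The function and variable names are illustrative; in a real kernel the counter would live in scheduler state and the check would run from the NMI exception handler:

```c
#include <stdint.h>

/* Heartbeat counter: the scheduler increments this once per pass
 * through its main processing loop. */
static volatile uint64_t sched_heartbeat;

void scheduler_loop_tick(void)
{
    sched_heartbeat++;
}

/* Called from the NMI exception handler.  Compares the current
 * heartbeat to the value observed on the previous NMI; if it has not
 * advanced, the scheduler is starved and we report a hang (return 1,
 * where a real handler would panic). */
int nmi_check_hang(void)
{
    static uint64_t last_seen;
    if (sched_heartbeat == last_seen)
        return 1;                 /* hung: heartbeat did not advance */
    last_seen = sched_heartbeat;
    return 0;                     /* healthy */
}
```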
Measuring System Performance & Code Coverage
The second benefit is to help measure system performance and code coverage. The NMI Exception Handler uses a Code Array to achieve this purpose. The kernel code space is divided into buckets, where each bucket is an entry in the Code Array; the size of a bucket is configurable. Each bucket keeps track of code hits – the number of times its section of code has been sampled. For a bucket size of B, let PC be the program counter when an NMI occurs. Then the specific bucket is identified by (PC / B).
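The bucket mapping can be sketched as below. The bucket size, array length, and kernel base address are illustrative assumptions; the text's (PC / B) formula is applied to the PC's offset from the start of the kernel code space:

```c
#include <stdint.h>
#include <stddef.h>

#define BUCKET_SIZE 64          /* B: bytes of code per bucket (configurable)  */
#define KERNEL_BASE 0x10000u    /* hypothetical start of kernel code space     */
#define NBUCKETS    1024

/* The Code Array: one hit counter per bucket of kernel code. */
static uint32_t code_array[NBUCKETS];

/* Called from the NMI exception handler with the interrupted PC.
 * Maps the PC to its bucket via (offset / B) and counts the hit. */
void nmi_record_pc(uintptr_t pc)
{
    size_t idx = (pc - KERNEL_BASE) / BUCKET_SIZE;
    if (idx < NBUCKETS)
        code_array[idx]++;
}
```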
So to measure system performance and code coverage for a given test:
1) Clear the Code Array.
2) Run the test.
3) Examine the Code Array.
If we order the buckets by hit count, descending, then we have a clear indication of which code ran, and how often, during the test. This can be used to detect anomalies and to shine a spotlight on the hottest code.
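Step 3 – examining the Code Array – amounts to ranking the buckets hottest-first. A minimal sketch, using a small hard-coded Code Array and `qsort` for the ordering (names are illustrative):

```c
#include <stdlib.h>
#include <stdint.h>

#define NBUCKETS 8

/* Sample hit counts collected during a hypothetical test run. */
static uint32_t code_array[NBUCKETS] = { 3, 0, 17, 5, 0, 9, 1, 0 };

/* qsort comparator: order bucket indices by hit count, descending. */
static int by_hits_desc(const void *a, const void *b)
{
    uint32_t ha = code_array[*(const size_t *)a];
    uint32_t hb = code_array[*(const size_t *)b];
    return (ha < hb) - (ha > hb);
}

/* Fill `order` with bucket indices sorted hottest-first; order[0] is
 * the bucket of kernel code that took the most NMI samples. */
void rank_buckets(size_t order[NBUCKETS])
{
    for (size_t i = 0; i < NBUCKETS; i++)
        order[i] = i;
    qsort(order, NBUCKETS, sizeof order[0], by_hits_desc);
}
```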
So go forth and make use of this wonderful tool.

