25 Jun 2025

Kubernetes Blog

Image Compatibility In Cloud Native Environments

In industries where systems must run very reliably and meet strict performance criteria, such as telecommunications, high-performance computing, or AI, containerized applications often need a specific operating system configuration or the presence of particular hardware. It is common practice to require specific versions of the kernel, its configuration, device drivers, or other system components. Despite the existence of the Open Container Initiative (OCI), a governing community that defines standards and specifications for container images, there has been a gap in expressing such compatibility requirements. The need to address this issue has led to different proposals and, ultimately, an implementation in Kubernetes' Node Feature Discovery (NFD).

NFD is an open source Kubernetes project that automatically detects and reports hardware and system features of cluster nodes. This information helps users to schedule workloads on nodes that meet specific system requirements, which is especially useful for applications with strict hardware or operating system dependencies.

The need for image compatibility specification

Dependencies between containers and host OS

A container image is built on a base image, which provides a minimal runtime environment, often a stripped-down Linux userland, completely empty or distroless. When an application requires certain features from the host OS, compatibility issues arise. These dependencies can manifest in several ways, for example as requirements on the kernel version or configuration, on loaded kernel modules and device drivers, or on the presence of specific hardware.

While containers in Kubernetes are the most likely unit of abstraction for these needs, the definition of compatibility can extend further to include other container technologies such as Singularity and other OCI artifacts such as binaries from a Spack binary cache.

Multi-cloud and hybrid cloud challenges

Containerized applications are deployed across various Kubernetes distributions and cloud providers, where different host operating systems introduce compatibility challenges. Often those hosts have to be pre-configured before workload deployment, or they are immutable. For instance, different cloud providers include different host operating systems.

Each OS comes with unique kernel versions, configurations, and drivers, making compatibility a non-trivial issue for applications requiring specific features. It must be possible to quickly assess a container for its suitability to run on any specific environment.

Image compatibility initiative

An effort was made within the Open Container Initiative's image compatibility working group to introduce a standard for image compatibility metadata. A compatibility specification allows container authors to declare the host OS features an image requires, making compatibility requirements discoverable and programmable.

The specification described in this article is one of the proposals discussed in that group, and it has since been implemented in the Kubernetes Node Feature Discovery project.

Implementation in Node Feature Discovery

The solution integrates compatibility metadata into Kubernetes via NFD features and the NodeFeatureGroup API. This interface enables the user to match containers to nodes based on the hardware and software features those nodes expose, allowing for intelligent scheduling and workload optimization.

Compatibility specification

The compatibility specification is a structured list of compatibility objects containing node feature group rules. These objects define image requirements and facilitate validation against host nodes. The feature requirements are described using the list of features available from the NFD project.

An example might look like the following:

version: v1alpha1
compatibilities:
- description: "My image requirements"
  rules:
  - name: "kernel and cpu"
    matchFeatures:
    - feature: kernel.loadedmodule
      matchExpressions:
        vfio-pci: {op: Exists}
    - feature: cpu.model
      matchExpressions:
        vendor_id: {op: In, value: ["Intel", "AMD"]}
  - name: "one of available nics"
    matchAny:
    - matchFeatures:
      - feature: pci.device
        matchExpressions:
          vendor: {op: In, value: ["0eee"]}
          class: {op: In, value: ["0200"]}
    - matchFeatures:
      - feature: pci.device
        matchExpressions:
          vendor: {op: In, value: ["0fff"]}
          class: {op: In, value: ["0200"]}

Client implementation for node validation

To streamline compatibility validation, we implemented a client tool that allows for node validation based on an image's compatibility artifact. In this workflow, the image author would generate a compatibility artifact that points to the image it describes in a registry via the referrers API. When a need arises to assess the fit of an image to a host, the tool can discover the artifact and verify compatibility of an image to a node before deployment. The client can validate nodes both inside and outside a Kubernetes cluster, extending the utility of the tool beyond the single Kubernetes use case.

In the future, image compatibility could play a crucial role in creating specific workload profiles based on image compatibility requirements, aiding in more efficient scheduling. Additionally, it could potentially enable automatic node configuration to some extent, further optimizing resource allocation and ensuring seamless deployment of specialized workloads.

Examples of usage

  1. Define image compatibility metadata
    A container image can have metadata that describes its requirements based on features discovered from nodes, like kernel modules or CPU models. The previous compatibility specification example in this article exemplified this use case.

  2. Attach the artifact to the image
    The image compatibility specification is stored as an OCI artifact. You can attach this metadata to your container image using the oras tool. The registry only needs to support OCI artifacts; support for arbitrary types is not required. Keep in mind that the container image and the artifact must be stored in the same registry. Use the following command to attach the artifact to the image:

oras attach \
  --artifact-type application/vnd.nfd.image-compatibility.v1alpha1 <image-url> \
  <path-to-spec>.yaml:application/vnd.nfd.image-compatibility.spec.v1alpha1+yaml
  3. Validate image compatibility
    After attaching the compatibility specification, you can validate whether a node meets the image's requirements. This validation can be done using the nfd client:

nfd compat validate-node --image <image-url>

  4. Read the output from the client
    Finally, you can read the report generated by the tool, or use your own tooling to act on the generated JSON report.

[Figure: validate-node command output]

Conclusion

The addition of image compatibility to Kubernetes through Node Feature Discovery underscores the growing importance of addressing compatibility in cloud native environments. It is only a start, as further work is needed to integrate compatibility into scheduling of workloads within and outside of Kubernetes. However, by integrating this feature into Kubernetes, mission-critical workloads can now define and validate host OS requirements more efficiently. Moving forward, the adoption of compatibility metadata within Kubernetes ecosystems will significantly enhance the reliability and performance of specialized containerized applications, ensuring they meet the stringent requirements of industries like telecommunications and high-performance computing, or of any environment that requires special hardware or host OS configuration.

Get involved

Join the Kubernetes Node Feature Discovery project if you're interested in getting involved with the design and development of the Image Compatibility API and tools. We always welcome new contributors.

25 Jun 2025 12:00am GMT

16 Jun 2025

Kubernetes Blog

Changes to Kubernetes Slack

UPDATE: We've received notice from Salesforce that our Slack workspace WILL NOT BE DOWNGRADED on June 20th. Stand by for more details, but for now, there is no urgency to back up private channels or direct messages.

Kubernetes Slack will lose its special status and will be changing into a standard free Slack on June 20, 2025. Sometime later this year, our community may move to a new platform. If you are responsible for a channel or private channel, or are a member of a User Group, you will need to take some actions as soon as you can.

For the last decade, Slack has supported our project with a free customized enterprise account. They have let us know that they can no longer do so, particularly since our Slack is one of the largest and most active ones on the platform. As such, they will be downgrading it to a standard free Slack while we decide on, and implement, other options.

On Friday, June 20, we will be subject to the feature limitations of free Slack. The primary limitations that will affect us are retaining only 90 days of history and having to disable several apps and workflows that we currently use. The Slack Admin team will do their best to manage these limitations.

Responsible channel owners, members of private channels, and members of User Groups should take some actions to prepare for the downgrade and preserve information as soon as possible.

The CNCF Projects Staff have proposed that our community look at migrating to Discord. Because we have been pushing the limits of Slack for some time, they have already explored what a Kubernetes Discord would look like. Discord would allow us to implement new tools and integrations which would help the community, such as GitHub group membership synchronization. The Steering Committee will discuss and decide on our future platform.

Please see our FAQ, and check the kubernetes-dev mailing list and the #announcements channel for further news. If you have specific feedback on our Slack status, join the discussion on GitHub.

16 Jun 2025 12:00am GMT

10 Jun 2025

Kubernetes Blog

Enhancing Kubernetes Event Management with Custom Aggregation

Kubernetes Events provide crucial insights into cluster operations, but as clusters grow, managing and analyzing these events becomes increasingly challenging. This blog post explores how to build custom event aggregation systems that help engineering teams better understand cluster behavior and troubleshoot issues more effectively.

The challenge with Kubernetes events

In a Kubernetes cluster, events are generated for various operations - from pod scheduling and container starts to volume mounts and network configurations. While these events are invaluable for debugging and monitoring, several challenges emerge in production environments:

  1. Volume: Large clusters can generate thousands of events per minute
  2. Retention: Default event retention is limited to one hour
  3. Correlation: Related events from different components are not automatically linked
  4. Classification: Events lack standardized severity or category classifications
  5. Aggregation: Similar events are not automatically grouped

To learn more about Events in Kubernetes, read the Event API reference.

Real-world value

Consider a production environment with dozens of microservices, where users report intermittent transaction failures:

Traditional event analysis process: Engineers spend hours sifting through thousands of standalone events spread across namespaces. By the time they investigate, the older events have long since been purged, and correlating pod restarts with node-level issues is practically impossible.

With custom event aggregation: The system groups related events across resources, instantly surfacing correlation patterns such as volume mount timeouts preceding pod restarts. Historical data shows that the same pattern occurred during past traffic spikes, highlighting a storage scalability issue in minutes rather than hours.

Organizations that implement this approach typically cut their troubleshooting time significantly and improve system reliability by detecting such patterns early.

Building an Event aggregation system

This post explores how to build a custom event aggregation system that addresses these challenges, aligned with Kubernetes best practices. I've picked the Go programming language for my examples.

Architecture overview

This event aggregation system consists of three main components:

  1. Event Watcher: Monitors the Kubernetes API for new events
  2. Event Processor: Processes, categorizes, and correlates events
  3. Storage Backend: Stores processed events for longer retention

Here's a sketch for how to implement the event watcher:

package main

import (
    "context"

    eventsv1 "k8s.io/api/events/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

type EventWatcher struct {
    clientset *kubernetes.Clientset
}

func NewEventWatcher(config *rest.Config) (*EventWatcher, error) {
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }
    return &EventWatcher{clientset: clientset}, nil
}

func (w *EventWatcher) Watch(ctx context.Context) (<-chan *eventsv1.Event, error) {
    events := make(chan *eventsv1.Event)

    watcher, err := w.clientset.EventsV1().Events("").Watch(ctx, metav1.ListOptions{})
    if err != nil {
        return nil, err
    }

    go func() {
        defer close(events)
        for {
            select {
            case event, ok := <-watcher.ResultChan():
                // Stop when the server closes the watch instead of spinning on a closed channel.
                if !ok {
                    return
                }
                if e, ok := event.Object.(*eventsv1.Event); ok {
                    events <- e
                }
            case <-ctx.Done():
                watcher.Stop()
                return
            }
        }
    }()

    return events, nil
}
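
As a usage sketch, the watcher might be wired into a main function like the one below. This is illustrative only: it assumes the aggregator runs in-cluster and simply logs each event instead of handing it to the processor and storage components described next.

func main() {
    // Assumes the aggregator runs in-cluster; use clientcmd to load a kubeconfig otherwise.
    config, err := rest.InClusterConfig()
    if err != nil {
        log.Fatalf("failed to load in-cluster config: %v", err)
    }

    watcher, err := NewEventWatcher(config)
    if err != nil {
        log.Fatalf("failed to create event watcher: %v", err)
    }

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    events, err := watcher.Watch(ctx)
    if err != nil {
        log.Fatalf("failed to start watch: %v", err)
    }

    // Placeholder handling: log each event as it arrives.
    for event := range events {
        log.Printf("observed event %s/%s reason=%s", event.Namespace, event.Name, event.Reason)
    }
}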

Event processing and classification

The event processor enriches events with additional context and classification:

type EventProcessor struct {
    categoryRules    []CategoryRule
    correlationRules []CorrelationRule
}

type ProcessedEvent struct {
    Event         *eventsv1.Event
    Category      string
    Severity      string
    CorrelationID string
    Metadata      map[string]string
}

func (p *EventProcessor) Process(event *eventsv1.Event) *ProcessedEvent {
    processed := &ProcessedEvent{
        Event:    event,
        Metadata: make(map[string]string),
    }

    // Apply classification rules
    processed.Category = p.classifyEvent(event)
    processed.Severity = p.determineSeverity(event)

    // Generate correlation ID for related events
    processed.CorrelationID = p.correlateEvent(event)

    // Add useful metadata
    processed.Metadata = p.extractMetadata(event)

    return processed
}
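
The classification helpers referenced in Process are not defined in this post. A minimal sketch is shown below; the reason-to-category mapping and the metadata fields are illustrative assumptions, and a real implementation would evaluate the processor's configured CategoryRule entries rather than a hard-coded switch.

// classifyEvent assigns a coarse category based on the event reason.
// The mapping here is illustrative; real rules would come from categoryRules.
func (p *EventProcessor) classifyEvent(event *eventsv1.Event) string {
    switch event.Reason {
    case "Failed", "FailedScheduling", "FailedMount":
        return "error"
    case "BackOff", "Unhealthy", "Killing":
        return "warning"
    default:
        return "info"
    }
}

// determineSeverity maps the Kubernetes event type onto a severity level.
func (p *EventProcessor) determineSeverity(event *eventsv1.Event) string {
    if event.Type == "Warning" {
        return "warning"
    }
    return "normal"
}

// extractMetadata copies a few identifying fields for easier querying later.
func (p *EventProcessor) extractMetadata(event *eventsv1.Event) map[string]string {
    return map[string]string{
        "namespace":  event.Regarding.Namespace,
        "kind":       event.Regarding.Kind,
        "name":       event.Regarding.Name,
        "controller": event.ReportingController,
    }
}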

Implementing Event correlation

One of the key features you could implement is a way of correlating related Events. Here's an example correlation strategy:

func (p *EventProcessor) correlateEvent(event *eventsv1.Event) string {
    // Correlation strategies:
    // 1. Time-based: Events within a time window
    // 2. Resource-based: Events affecting the same resource
    // 3. Causation-based: Events with cause-effect relationships

    correlationKey := generateCorrelationKey(event)
    return correlationKey
}

func generateCorrelationKey(event *eventsv1.Event) string {
    // Example: Combine namespace, resource type, and name of the object
    // the event refers to (the Regarding field in events.k8s.io/v1).
    return fmt.Sprintf("%s/%s/%s",
        event.Regarding.Namespace,
        event.Regarding.Kind,
        event.Regarding.Name,
    )
}

Event storage and retention

For long-term storage and analysis, you'll probably want a backend that supports time-range queries, filtering by category, severity, and correlation ID, and aggregation across events, with retention well beyond the default one hour.

Here's a sample storage interface:

type EventStorage interface {
    Store(context.Context, *ProcessedEvent) error
    Query(context.Context, EventQuery) ([]ProcessedEvent, error)
    Aggregate(context.Context, AggregationParams) ([]EventAggregate, error)
}

type EventQuery struct {
    TimeRange     TimeRange
    Categories    []string
    Severity      []string
    CorrelationID string
    Limit         int
}

type AggregationParams struct {
    GroupBy    []string
    TimeWindow string
    Metrics    []string
}
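
A production deployment would back this interface with a time-series or document database. As a minimal sketch under that caveat, the toy in-memory implementation below (a hypothetical InMemoryStorage type, with aggregation stubbed out and only the correlation-ID filter applied) shows how the interface can be satisfied for tests or local experimentation. It assumes the standard context and sync packages are imported.

// InMemoryStorage is a toy EventStorage backend that keeps everything in a
// slice guarded by a mutex; useful for tests, not for production retention.
type InMemoryStorage struct {
    mu     sync.Mutex
    events []ProcessedEvent
}

func (s *InMemoryStorage) Store(ctx context.Context, event *ProcessedEvent) error {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.events = append(s.events, *event)
    return nil
}

func (s *InMemoryStorage) Query(ctx context.Context, q EventQuery) ([]ProcessedEvent, error) {
    s.mu.Lock()
    defer s.mu.Unlock()

    var results []ProcessedEvent
    for _, e := range s.events {
        // Only the correlation-ID filter is implemented here; a real backend
        // would also apply the time range, category, and severity filters.
        if q.CorrelationID != "" && e.CorrelationID != q.CorrelationID {
            continue
        }
        results = append(results, e)
        if q.Limit > 0 && len(results) >= q.Limit {
            break
        }
    }
    return results, nil
}

func (s *InMemoryStorage) Aggregate(ctx context.Context, p AggregationParams) ([]EventAggregate, error) {
    // Aggregation is left out of this sketch.
    return nil, nil
}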

Good practices for Event management

  1. Resource Efficiency (see the sketch after this list)

    • Implement rate limiting for event processing
    • Use efficient filtering at the API server level
    • Batch events for storage operations
  2. Scalability

    • Distribute event processing across multiple workers
    • Use leader election for coordination
    • Implement backoff strategies for API rate limits
  3. Reliability

    • Handle API server disconnections gracefully
    • Buffer events during storage backend unavailability
    • Implement retry mechanisms with exponential backoff
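
To illustrate rate limiting and batching, the sketch below throttles processing with golang.org/x/time/rate and flushes processed events to the storage backend in batches. The limiter settings, batch size, and flush interval are arbitrary values chosen for illustration, not recommendations.

// consume drains the watcher channel, rate-limits processing, and writes
// processed events to storage in batches instead of one write per event.
// Requires "golang.org/x/time/rate", "time", and the eventsv1 import used above.
func consume(ctx context.Context, events <-chan *eventsv1.Event, processor *EventProcessor, storage EventStorage) error {
    limiter := rate.NewLimiter(rate.Limit(100), 200) // allow ~100 events/s with bursts of 200
    batch := make([]*ProcessedEvent, 0, 50)

    flush := func() error {
        for _, e := range batch {
            if err := storage.Store(ctx, e); err != nil {
                return err
            }
        }
        batch = batch[:0]
        return nil
    }

    ticker := time.NewTicker(5 * time.Second)
    defer ticker.Stop()

    for {
        select {
        case event, ok := <-events:
            if !ok {
                // Channel closed by the watcher: flush whatever is left and stop.
                return flush()
            }
            if err := limiter.Wait(ctx); err != nil {
                return err
            }
            batch = append(batch, processor.Process(event))
            if len(batch) >= 50 {
                if err := flush(); err != nil {
                    return err
                }
            }
        case <-ticker.C:
            // Periodic flush so a trickle of events does not sit in memory indefinitely.
            if err := flush(); err != nil {
                return err
            }
        case <-ctx.Done():
            return ctx.Err()
        }
    }
}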

Advanced features

Pattern detection

Implement pattern detection to identify recurring issues:

// Pattern describes a group of similar events that recur over time.
// Its fields are inferred from how identifyPatterns (below) populates it.
type Pattern struct {
    Type         string
    Count        int
    FirstSeen    time.Time
    LastSeen     time.Time
    Frequency    float64 // events per minute
    EventSamples []ProcessedEvent
}

type PatternDetector struct {
    patterns  map[string]*Pattern
    threshold int
}

func (d *PatternDetector) Detect(events []ProcessedEvent) []Pattern {
    // Group similar events
    groups := groupSimilarEvents(events)

    // Analyze frequency and timing
    patterns := identifyPatterns(groups)

    return patterns
}

func groupSimilarEvents(events []ProcessedEvent) map[string][]ProcessedEvent {
    groups := make(map[string][]ProcessedEvent)

    for _, event := range events {
        // Create similarity key based on event characteristics
        similarityKey := fmt.Sprintf("%s:%s:%s",
            event.Event.Reason,
            event.Event.Regarding.Kind,
            event.Event.Regarding.Namespace,
        )

        // Group events with the same key
        groups[similarityKey] = append(groups[similarityKey], event)
    }

    return groups
}


// eventTime returns the most useful timestamp available on an
// events.k8s.io/v1 Event: EventTime when set, otherwise the deprecated
// last-observed timestamp populated for events recorded via the core API.
func eventTime(e *eventsv1.Event) time.Time {
    if !e.EventTime.IsZero() {
        return e.EventTime.Time
    }
    return e.DeprecatedLastTimestamp.Time
}

func identifyPatterns(groups map[string][]ProcessedEvent) []Pattern {
    var patterns []Pattern

    for key, events := range groups {
        // Only consider groups with enough events to form a pattern
        if len(events) < 3 {
            continue
        }

        // Sort events by time
        sort.Slice(events, func(i, j int) bool {
            return eventTime(events[i].Event).Before(eventTime(events[j].Event))
        })

        // Calculate time range and frequency
        firstSeen := eventTime(events[0].Event)
        lastSeen := eventTime(events[len(events)-1].Event)
        duration := lastSeen.Sub(firstSeen).Minutes()

        var frequency float64
        if duration > 0 {
            frequency = float64(len(events)) / duration
        }

        // Create a pattern if it meets threshold criteria
        if frequency > 0.5 { // More than 1 event per 2 minutes
            pattern := Pattern{
                Type:         key,
                Count:        len(events),
                FirstSeen:    firstSeen,
                LastSeen:     lastSeen,
                Frequency:    frequency,
                EventSamples: events[:min(3, len(events))], // Keep up to 3 samples
            }
            patterns = append(patterns, pattern)
        }
    }

    return patterns
}

With this implementation, the system can identify recurring patterns such as node pressure events, pod scheduling failures, or networking issues that occur with a specific frequency.
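
As a usage sketch, assuming a slice of already-processed events and the Pattern fields shown above (the log format and helper name are illustrative), detection could be reported like this:

// reportPatterns logs every recurring pattern found in a batch of processed events.
// Assumes the standard log and time packages are imported.
func reportPatterns(processedEvents []ProcessedEvent) {
    detector := &PatternDetector{patterns: make(map[string]*Pattern)}

    for _, pattern := range detector.Detect(processedEvents) {
        log.Printf("pattern %q: %d occurrences between %s and %s (%.2f events/min)",
            pattern.Type,
            pattern.Count,
            pattern.FirstSeen.Format(time.RFC3339),
            pattern.LastSeen.Format(time.RFC3339),
            pattern.Frequency,
        )
    }
}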

Real-time alerts

The following example provides a starting point for building an alerting system based on event patterns. It is not a complete solution but a conceptual sketch to illustrate the approach.

type AlertManager struct {
    rules     []AlertRule
    notifiers []Notifier
}

func (a *AlertManager) EvaluateEvents(events []ProcessedEvent) {
    for _, rule := range a.rules {
        if rule.Matches(events) {
            alert := rule.GenerateAlert(events)
            a.notify(alert)
        }
    }
}
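
The AlertRule and Notifier types, and the notify helper, are left undefined above. One way they might be fleshed out is sketched below; the Alert struct and the CountThresholdRule are invented for illustration and are not part of any existing API. Assumes the standard fmt and log packages are imported.

type Alert struct {
    Summary  string
    Severity string
    Events   []ProcessedEvent
}

type Notifier interface {
    Send(alert Alert) error
}

type AlertRule interface {
    Matches(events []ProcessedEvent) bool
    GenerateAlert(events []ProcessedEvent) Alert
}

func (a *AlertManager) notify(alert Alert) {
    for _, n := range a.notifiers {
        // Errors are only logged here; a real system might retry or queue.
        if err := n.Send(alert); err != nil {
            log.Printf("failed to send alert: %v", err)
        }
    }
}

// CountThresholdRule fires when too many events of a given category arrive in one batch.
type CountThresholdRule struct {
    Category string
    Min      int
}

func (r CountThresholdRule) matching(events []ProcessedEvent) []ProcessedEvent {
    var out []ProcessedEvent
    for _, e := range events {
        if e.Category == r.Category {
            out = append(out, e)
        }
    }
    return out
}

func (r CountThresholdRule) Matches(events []ProcessedEvent) bool {
    return len(r.matching(events)) >= r.Min
}

func (r CountThresholdRule) GenerateAlert(events []ProcessedEvent) Alert {
    matched := r.matching(events)
    return Alert{
        Summary:  fmt.Sprintf("%d %q events in the latest batch", len(matched), r.Category),
        Severity: "warning",
        Events:   matched,
    }
}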

Conclusion

A well-designed event aggregation system can significantly improve cluster observability and troubleshooting capabilities. By implementing custom event processing, correlation, and storage, operators can better understand cluster behavior and respond to issues more effectively.

The solutions presented here can be extended and customized based on specific requirements while maintaining compatibility with the Kubernetes API and following best practices for scalability and reliability.

Next steps

Future enhancements could build on the components described in this post.

For more information on Kubernetes events and custom controllers, refer to the official Kubernetes documentation.

10 Jun 2025 12:00am GMT