NOS
NOS is a collaborative audio-visual performance platform that creates a direct dialogue between sound and visual form through real-time translation. Developed as a partnership between NOHlab (Candas Sisman, Deniz Kader) and myself, NOS approaches audio-visual work as a unified medium where neither sound nor visuals dominate—instead creating a holistic perceptual experience where each sonic element finds expression in the visual realm. The platform centers around custom software I developed that functions as a visual instrument, analyzing incoming audio and translating different frequency ranges into visual parameters of generative art algorithms. This creates responsive environments where the shapes, movements, and colors directly correlate with musical elements, transforming performances into immersive sensory dialogues between musicians and visual artists.
NOS has been showcased at major international venues and festivals including Ars Electronica (Linz), Sonar Festival (Istanbul), Europalia Festival (Brussels), Signal Festival (Prague), and Borusan Contemporary Museum (Istanbul), in collaboration with renowned artists such as Maki Namekawa, Jef Neve, and the Conservatoire Royal de Liège.
Concept
The core philosophy behind NOS is the pursuit of genuine synthesis between sound and image—treating audio-visual work as a new combined art form rather than simply adding visuals to music or a soundtrack to visuals. This approach draws conceptual parallels to foley art in film, where every sound has a corresponding visual source, but inverts and abstracts this relationship into pure form. By creating a system where musical elements directly influence digital geometries, NOS reveals hidden patterns in musical expression and makes audible information visible.
The platform challenges conventional approaches to audio-visual performance by elevating the visual component from mere accompaniment to an equal creative partner. Rather than functioning as a simple visualizer with predetermined effects, the NOS system creates visual compositions that could stand as independent artistic works, with real-time sound responsiveness adding further layers of depth and connection. This creates a true dialogue between musicians and visual artists, where each medium can both lead and respond to the other.
NOS explores how technology can function not just as a tool for documentation or reproduction but as an expressive instrument in its own right. The software becomes a performance interface that allows for artistic interpretation and improvisation, creating unique experiences that emerge from the specific context and moment of each performance.
Technical Details
At the heart of NOS is a custom software system I developed that employs a modular architecture of "visual engines"—generative art algorithms with exposed parameters that can be manipulated in real-time during performance. These engines create abstract geometric forms and movement patterns that form the visual vocabulary of the performance.
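As an illustration of this modular structure, the sketch below shows one way a visual engine might expose named parameters to a performance interface; the class names and parameters are hypothetical stand-ins for the idea, not the actual NOS codebase.

```python
from dataclasses import dataclass


@dataclass
class Parameter:
    """A single exposed control of a visual engine, normalized to 0..1."""
    name: str
    value: float = 0.0

    def set(self, value: float) -> None:
        # Clamp so external controllers (audio analysis, MIDI, OSC) stay in range.
        self.value = max(0.0, min(1.0, value))


class VisualEngine:
    """Base class: a generative algorithm with named, externally settable parameters."""

    def __init__(self, name: str):
        self.name = name
        self.params: dict[str, Parameter] = {}

    def expose(self, name: str, value: float = 0.0) -> Parameter:
        param = Parameter(name, value)
        self.params[name] = param
        return param

    def set_param(self, name: str, value: float) -> None:
        self.params[name].set(value)

    def update(self, dt: float) -> None:
        """Advance the generative state by dt seconds; drawing is left to the host."""
        raise NotImplementedError


class OrbitingShapes(VisualEngine):
    """Example engine: a ring of shapes whose rotation, scale, and hue are exposed."""

    def __init__(self):
        super().__init__("orbiting_shapes")
        self.rotation_speed = self.expose("rotation_speed", 0.2)
        self.scale = self.expose("scale", 0.5)
        self.hue = self.expose("hue", 0.0)
        self.angle = 0.0

    def update(self, dt: float) -> None:
        # Rotation advances according to the exposed speed parameter.
        self.angle += self.rotation_speed.value * dt * 360.0


if __name__ == "__main__":
    engine = OrbitingShapes()
    engine.set_param("rotation_speed", 0.8)   # e.g. driven by bass energy
    engine.update(1.0 / 60.0)                 # one frame at 60 fps
    print(engine.angle)
```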
The audio analysis system employs a Fourier transform with customized low-pass filters for different frequency ranges, allowing precise mapping between specific sonic elements and visual parameters. What distinguishes this approach from conventional audio-reactive systems is the ability to modulate the response characteristics: adjusting attack, decay, and sensitivity yields visual responses that range from immediate and energetic to subtle and ambient. This allows for nuanced artistic interpretation of the sound rather than merely mechanical translation.
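A minimal sketch of this kind of band analysis is shown below. It assumes a mono float buffer per analysis window, uses numpy's FFT, and stands in for the customized filtering described above with simple asymmetric attack/decay smoothing; the band edges and coefficients are illustrative, not the actual NOS values.

```python
import numpy as np


class BandAnalyzer:
    """Tracks smoothed energy in a few frequency bands of an incoming audio stream."""

    # Illustrative band edges in Hz: bass, mids, highs.
    BANDS = [(20, 200), (200, 2000), (2000, 12000)]

    def __init__(self, sample_rate: int = 44100, attack: float = 0.5, decay: float = 0.05):
        self.sample_rate = sample_rate
        self.attack = attack    # how quickly a level rises toward a louder input
        self.decay = decay      # how quickly it falls back when the input drops
        self.levels = np.zeros(len(self.BANDS))

    def process(self, frame: np.ndarray) -> np.ndarray:
        """frame: mono float samples for one analysis window."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / self.sample_rate)

        for i, (lo, hi) in enumerate(self.BANDS):
            band = spectrum[(freqs >= lo) & (freqs < hi)]
            energy = float(np.mean(band)) if band.size else 0.0
            # Asymmetric smoothing: separate attack and decay coefficients are
            # what let the visual response feel either energetic or ambient.
            coeff = self.attack if energy > self.levels[i] else self.decay
            self.levels[i] += coeff * (energy - self.levels[i])

        return self.levels


if __name__ == "__main__":
    analyzer = BandAnalyzer()
    t = np.arange(1024) / 44100.0
    frame = np.sin(2 * np.pi * 110 * t)   # a low sine, mostly bass energy
    print(analyzer.process(frame))
```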
The performance interface provides real-time control over how sound influences visuals—sometimes affecting the rotation speed of elements, other times their size, color, or movement patterns. This creates a coherent audio-visual language where every sound finds its visual counterpart, and allows visual performers to "play" the system in response to musical improvisation happening on stage.
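Building on the two sketches above, the routing sketch below illustrates how an analysis band could be patched to different engine parameters at different moments of a performance; the Mapping and Router classes are hypothetical and only show the shape of such a mapping layer.

```python
from dataclasses import dataclass


@dataclass
class Mapping:
    """Routes one analysis band to one exposed engine parameter."""
    band_index: int       # which analyzer band drives this mapping
    param_name: str       # which exposed parameter it modulates
    depth: float = 1.0    # performer-controlled amount of influence
    offset: float = 0.0   # baseline value when the band is silent


class Router:
    """Applies a set of mappings from analyzer levels to a visual engine."""

    def __init__(self, engine, mappings):
        self.engine = engine
        self.mappings = list(mappings)

    def apply(self, levels) -> None:
        for m in self.mappings:
            value = m.offset + m.depth * float(levels[m.band_index])
            self.engine.set_param(m.param_name, value)


# During a performance the routing itself can change: the same bass band might
# drive rotation speed in one section and scale in the next, e.g.
# router = Router(engine, [Mapping(band_index=0, param_name="rotation_speed", depth=0.8)])
# router.apply(analyzer.process(frame))
```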
The system architecture is designed for flexibility and can accommodate various input sources—from live instruments to electronic production, DJ sets, or recorded material—creating a versatile platform that adapts to different performance contexts while maintaining its core artistic approach.
Experience
NOS performances create immersive environments where audiences experience sound and visuals as unified sensory fields rather than separate channels of information. The direct correlation between what is heard and what is seen creates a heightened state of perception, where patterns and relationships become more immediately apparent.
The real-time nature of the system creates a sense of presence and immediacy that distinguishes these performances from pre-rendered content. Audiences witness the creative dialogue between musicians and visual artists unfold in the moment, with each performance becoming a unique expression that could never be exactly replicated.
Despite its technological foundation, NOS prioritizes the human element of performance. The system is designed not for automation but for expressive control, allowing performers to make interpretive decisions that shape the audience experience. This creates performances that feel both technically sophisticated and emotionally resonant, balancing computational precision with human intuition. The core software is available as an open-source project on GitHub, providing access to the technical framework, while the NOS Visuals website showcases the artistic outcomes of various performances and collaborations.
Credits
Founders: NOHlab (Candas Sisman, Deniz Kader) and Osman Koç
Software Development: Osman Koç
Visual Design: NOHlab and Osman Koç
Performance Collaborations: Maki Namekawa, Jef Neve, Conservatoire Royal de Liège, AudioFil, Eser Karaca, Görkem Sen, and many others
NOS represents the ongoing exploration of audio-visual synthesis as a unified art form, demonstrating how real-time responsive systems can create meaningful connections between sound and image that transcend mere synchronization to achieve genuine artistic dialogue.