This article explains how to handle the essential peripherals in a C# softphone
application built with Ozeki VoIP SIP SDK. If you read through this guide carefully,
you will be familiar with all the terms concerning speaker and microphone
handling in softphone applications.
When communicating with a software phone, you use the main devices of your computer,
even if this is so natural that you do not realize it. For voice communication you need at least
a microphone and a speaker (and, naturally, a sound card), and for video calls you will
also need a camera (Figure 1). This article covers only audio peripheral handling; camera
handling is covered in another article.
Figure 1 - Softphone communication requires some basic peripheral devices
If you use Ozeki VoIP SIP SDK, handling these peripherals is the easiest
thing in the world. However, if you wanted to write the support for them all on your own, it
would not be as easy as it seems. The programmers at Ozeki work hard to make your work as
comfortable as possible, so you can sit back, call some methods, set some parameters
and, voila, your softphone will work properly.
Now, let's consider the basic devices you will need to use with a softphone to understand why
they are so important and how they work.
What peripherals are used in a software phone?
The absolute basic peripherals for a software phone are the speaker and the microphone. Without
these you couldn't make a phone call, and that is the main feature of a softphone.
The microphone is an input device that collects audio from its physical
environment and digitizes it into an audio stream, a format that a computer
audio player program can play. The digitization and the eventual compression
of the audio data are done by the microphone driver and some audio codecs.
Ozeki VoIP SIP SDK has built-in support for all the audio codecs and microphone handling
that you will need for your softphone application. You will not need to know all the
background activities or to understand the process. You only have to initialize a
microphone and call its methods as this article will show below.
The speaker is the essential audio output device of a computer. If you want to hear
the voice of your contact during a call, you will need to support a speaker. You can use
a headset instead of a speaker to avoid background noise and echo during calls, and
handling a headset is the same as handling a speaker, as the computer registers a headset as a speaker device.
The speaker receives an audio data stream from a program and emits the sound
into the physical environment. The transformation of the digital data to analog voice,
and all the codecs and drivers that need to be used, are handled by Ozeki VoIP
SIP SDK, so you only have to initialize the speaker and call the methods you need.
Ozeki VoIP SIP SDK classes for peripherals
The Ozeki.Media.MediaHandlers namespace of Ozeki VoIP SIP SDK contains the classes for
microphone and speaker handling. The video support and the WebCamera class are introduced in another article.
To use these classes, you will need to reference the Ozeki VoIP SIP
SDK in your application as shown in Figure 2 and Figure 3.
You will need to right-click the References label in the Solution Explorer panel,
which is usually on the right side of the Visual Studio window. Select Add Reference...
from the list that appears, and the Add Reference window will open.
Figure 2 - You can use the Solution Explorer panel for registering the Ozeki VoIP SIP SDK to your project
On the Add Reference window, you can browse your file system for the .dll of the SDK and
click OK when it is selected (Figure 3). If you installed the SDK with the default settings,
it can be found in the "C:\Program Files\Ozeki\SDK" folder.
Figure 3 - You can browse for your VoIPSDK.dll on the Add Reference window
After registering the SDK to the project, you will be able to use all the support, tools
and methods it provides. To make your work even easier, you can add some
new lines to the using section of your program so you can use the SDK names without
namespace problems (Code 1).
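The using section might look like the following sketch; the namespace names are taken from this article and typical Ozeki samples, so verify them against your SDK version:

```csharp
using Ozeki.Media.MediaHandlers; // Microphone, Speaker, MediaConnector, recorders
using Ozeki.VoIP;                // call-related types (assumed namespace)
```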
Code 1 - You need to add some extra lines to the using section to avoid writing out full namespace paths
As the Microphone and the Speaker classes are in the Ozeki.Media.MediaHandlers namespace,
the second line of Code 1 ensures that you can use these names without prefixing them with the
whole namespace path.
The Microphone class represents the audio input device
The Microphone class is a subclass of VoIPMediaHandler and represents a microphone device.
A Microphone object can be created by calling the parameterless constructor, or
you can specify the wave format for the microphone, that is, the audio format
the microphone captures. The default wave format of a Microphone object is .wav.
When a Microphone object is initialized, it gets a device ID that identifies the microphone.
You can check this ID to inform your softphone about the currently used microphone; this is
essential when your computer has more than one usable microphone. You can also
change the currently used microphone by calling the ChangeDevice method with the
device ID as its parameter.
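Switching to another capture device might be sketched like this; the device ID value below is only a placeholder, and in a real application you would obtain valid IDs from your own device enumeration:

```csharp
// A sketch of switching capture devices at runtime.
Microphone microphone = Microphone.GetDefaultDevice();
int secondMicrophoneId = 1;                  // placeholder: ID of another microphone
microphone.ChangeDevice(secondMicrophoneId); // switch to the other capture device
```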
There are some useful methods in the Microphone class for checking the level of
the microphone (the strength of the microphone signal), for checking or setting whether
the microphone is muted, and for getting or setting the volume of the device.
The Microphone class also provides some EventHandlers for events such as stopping and level change.
The most important methods of a Microphone object are Start() and Stop().
These are the ones you will use to turn the microphone on and off.
They need no parameters or settings; you only call these methods and connect
the microphone to another MediaHandler that suits your purpose, such as calling or recording.
The Speaker class contains all methods for the audio output
The Speaker class is a subclass of VoIPMediaHandler and represents a speaker as
an audio output device. You can initialize a speaker by calling the parameterless
constructor, or you can specify the wave format to be played. The default audio
format for a Speaker object is .wav.
The Speaker object also has a device ID, as you can use more than one audio output device with your computer
and change between them with the ChangeDevice method, which takes a device ID as its parameter.
You can get information about the level and the device ID of a Speaker and you can also check if
the speaker has been initialized. The SDK gives you the methods to check or set the volume
of a speaker and even to mute it.
The Speaker class contains the basic EventHandlers for the device that are for the events of
level change or stopping.
The most frequently used methods of this class will be the Start() and Stop() methods,
which do not need any parameters. These methods turn the speaker on and off.
You will only call these methods to make your speaker work and connect it to another
MediaHandler that works for your purposes of receiving a call or playing an audio file.
What can a microphone be used for?
Voice calling is the basic functionality of a softphone application. To make a voice call
you will need a Microphone, a PhoneCallAudioSender
and a MediaConnector to connect them. They are
initialized as shown in Code 2.
Microphone microphone = Microphone.GetDefaultDevice();
MediaConnector connector = new MediaConnector();
PhoneCallAudioSender mediaSender = new PhoneCallAudioSender();
Code 2 - You will need to initialize the input device and some other tools
In the case of voice calling, the MediaConnector object has to connect the Microphone
to the PhoneCallAudioSender to perform the call. You will also need to start
the microphone and attach the mediaSender to the call (Code 3).
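Using the objects from Code 2, the connection step might be sketched as follows; the call object and the AttachToCall method name are assumptions drawn from typical Ozeki samples, not guaranteed API:

```csharp
microphone.Start();                          // turn the microphone on
connector.Connect(microphone, mediaSender);  // route captured audio to the sender
mediaSender.AttachToCall(call);              // attach the sender to the active call (assumed API)
```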
Code 4 - Finishing a call means that all the devices have to be stopped
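The teardown step the caption above refers to might look like this sketch; the Detach and Disconnect calls are assumptions mirroring the setup step:

```csharp
mediaSender.Detach();                          // detach the sender from the call (assumed API)
connector.Disconnect(microphone, mediaSender); // undo the audio routing (assumed API)
microphone.Stop();                             // turn the microphone off
```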
You also use the microphone when recording a .wav audio file. In this case you will
need a WaveStreamRecorder object and the microphone has to be connected to it.
The WaveStreamRecorder has to be initialized with the file name or path of the .wav
file to be recorded. You have to start the microphone and connect the Microphone
object to the WaveStreamRecorder (Code 5).
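Following the steps just described, a minimal recording setup might look like this; the file path is an example value, and the Connect direction (source to sink) is an assumption based on typical Ozeki samples:

```csharp
// Record microphone input into a .wav file (example path).
WaveStreamRecorder recorder = new WaveStreamRecorder("test.wav");
microphone.Start();                        // start capturing audio
connector.Connect(microphone, recorder);   // route microphone audio into the recorder
recorder.StartStreaming();                 // begin writing the .wav file
```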
Code 5 - In case of .wav recording, you have to initialize a WaveStreamRecorder
After the initialization step, you can start, pause or stop the recording with the
methods Ozeki VoIP SIP SDK provides on the WaveStreamRecorder object:
StartStreaming(), PauseStreaming() and StopStreaming().
The difference between pausing and stopping recording is that in case of pausing, you can
continue recording to the same .wav file, while in case of stopping, the recording process will
be finished and the file will be finalized.
When you want to stop .wav recording, you need to call the Stop() method of the
microphone and disconnect the device from the WaveStreamRecorder (Code 7).
It is essential to call the Dispose() method of the WaveStreamRecorder object so that
the recorder finalizes and releases the file. This avoids file collisions later.
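Putting the stopping steps together, a sketch could look like this, assuming the WaveStreamRecorder object is named recorder; the Disconnect call and the ordering of StopStreaming before Stop are assumptions:

```csharp
recorder.StopStreaming();                   // finish recording and finalize the .wav file
microphone.Stop();                          // turn the microphone off
connector.Disconnect(microphone, recorder); // undo the audio routing (assumed API)
recorder.Dispose();                         // release the file to avoid collisions
```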
Code 7 - If you want to stop recording, you need to finish all processes
Now you are familiar with all the functions of a microphone and you can use the
methods Ozeki VoIP SIP SDK provides for microphone handling. It is time to take a
look at the basic audio output device, the speaker.
How can you use a speaker?
The speaker, as the most essential audio output device, plays a great part in phone calling,
so it is important to be able to use it properly. Ozeki VoIP SIP SDK gives
great support for this device, so handling a speaker will not be a difficult mission.
Code 8 shows the essential initialization steps for the audio output
operations. You will need to define a Speaker, a MediaConnector (as in the case of
the microphone) and a PhoneCallAudioReceiver.
The MediaConnector object is the same as in the case of the microphone. Its only task is to connect
the MediaHandlers together. You will need only one MediaConnector in your
softphone application for this purpose.
Speaker speaker = new Speaker();
MediaConnector connector = new MediaConnector();
PhoneCallAudioReceiver mediaReceiver = new PhoneCallAudioReceiver();
Code 8 - The basic initialization steps for performing audio output
In case of an incoming call, you will need to start the speaker and connect it to the
mediaReceiver object in order to hear the voice of the caller. The mediaReceiver
also has to be attached to the call itself (Code 9).
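Using the objects from Code 8, the steps above might be sketched as follows; the call object and the AttachToCall method name are assumptions drawn from typical Ozeki samples:

```csharp
speaker.Start();                            // turn the speaker on
connector.Connect(mediaReceiver, speaker);  // route incoming audio to the speaker
mediaReceiver.AttachToCall(call);           // attach the receiver to the call (assumed API)
```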
Code 10 - You have to stop the output device when the call has finished
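The stopping step the caption above refers to might be sketched like this; the Detach and Disconnect calls are assumptions mirroring the setup step:

```csharp
mediaReceiver.Detach();                       // detach from the finished call (assumed API)
connector.Disconnect(mediaReceiver, speaker); // undo the audio routing (assumed API)
speaker.Stop();                               // turn the speaker off
```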
The speaker also has a fundamental role in audio playing. When you want to play a
.wav audio file, you will need a WaveStreamPlayback object that has to be connected to the speaker.
The WaveStreamPlayback object has to be initialized with the filename or file path of
the .wav audio file to be played. Then you have to start the speaker and use your
MediaConnector object to connect the Speaker and the WaveStreamPlayback objects (Code 11).
WaveStreamPlayback wavePlayer = new WaveStreamPlayback(textBoxPlaybackFile.Text);
Code 11 - In case of audio playing, you will need to initialize a WaveStreamPlayback object and connect it to the Speaker
After the initialization steps, you can start, pause and stop streaming (that is, playing
the .wav file) by using the StartStreaming(), PauseStreaming() and StopStreaming() methods of the
WaveStreamPlayback object.
The difference between stopping and pausing a stream is that in case of pausing,
you can restart the playing process from the point it has been paused, but in case of stopping,
you can only start playing the audio file from the beginning again.
When stopping the streaming, you will need to stop the speaker and disconnect it from the
WaveStreamPlayback object. You also need to call the Dispose() method, which releases the
.wav file to avoid file collisions in the future.
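Assuming the WaveStreamPlayback object is named wavePlayer, these stopping steps might be sketched as follows; the Disconnect call is an assumption mirroring the setup step:

```csharp
wavePlayer.StopStreaming();                // stop playing the .wav file
speaker.Stop();                            // turn the speaker off
connector.Disconnect(wavePlayer, speaker); // undo the audio routing (assumed API)
wavePlayer.Dispose();                      // release the .wav file
```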
Code 12 - The proper way of stopping a .wav playing contains some disconnections
At this point you know everything you need about voice calling, audio playing and audio
recording. The next step could be to extend your knowledge of video calling and video support
using Ozeki VoIP SIP SDK.
Now you are familiar with all the audio peripheral handling in a softphone application.
It is time to take a step further and develop your own customized softphone application.
Further development possibilities
This sample program only handles one telephone line. However, Ozeki VoIP SIP
SDK offers the opportunity to develop programs that handle multiple telephone lines
simultaneously. Moreover, further functions, such as call forwarding and chat, can
also be implemented effectively.
This article gave you information about all the peripheral devices
that can be used in a softphone application supported by Ozeki VoIP SIP SDK. You are
now fully capable of building your own softphone solution using the necessary functions
of the SDK. Now it is time to take a step further and explore all the
possibilities Ozeki VoIP SIP SDK offers.