Custom Font Example iPhone

An example of using a custom font on the iPhone.

Initially I tried to load the custom font with UIFont, but that does not seem to be possible.

Documentation from Apple:
http://developer.apple.com/iphone/library/documentation/GraphicsImaging/Reference/CGFont/Reference/reference.html

Create the custom font using CGDataProvider:

NSString *fontPath = [[NSBundle mainBundle] pathForResource:@"CUSTOM_FONT" ofType:@"ttf"];
CGDataProviderRef fontDataProvider = CGDataProviderCreateWithFilename([fontPath UTF8String]);
CGFontRef customFont = CGFontCreateWithDataProvider(fontDataProvider);

Use the font:

CGContextSetFont(context, customFont);
CGContextSetFontSize(context, 34.0);
CGContextSetTextDrawingMode(context, kCGTextFill);

// One glyph per character of the string.
CGGlyph textToPrint[[mainString length]];

// Loop through the entire length of the text, storing each letter in a glyph.
// The MagicNumber offset maps the character code to the glyph index in this font.
for (int i = 0; i < [mainString length]; ++i) {
    textToPrint[i] = [[mainString uppercaseString] characterAtIndex:i] + 332;
}

// To understand the MagicNumber, I opened the font file in FontForge and
// found that the font I am using starts at location 32. After I changed
// "Encoding->Reencode" to "Glyph Order", I found that "U+0020 space" starts
// at location 3.
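
Putting it together, here is a minimal sketch of how the pieces might fit inside a UIView's drawRect: (the class name, string, draw position and font file name are placeholders; the +332 offset is specific to the font discussed above):

#import <UIKit/UIKit.h>

// Hypothetical view that draws a string with the bundled custom font.
@interface CustomFontView : UIView
@end

@implementation CustomFontView

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    NSString *mainString = @"HELLO";   // placeholder text

    // Load the font from the bundle, as shown above.
    NSString *fontPath = [[NSBundle mainBundle] pathForResource:@"CUSTOM_FONT" ofType:@"ttf"];
    CGDataProviderRef fontDataProvider = CGDataProviderCreateWithFilename([fontPath UTF8String]);
    CGFontRef customFont = CGFontCreateWithDataProvider(fontDataProvider);
    CGDataProviderRelease(fontDataProvider);   // the font keeps what it needs

    CGContextSetFont(context, customFont);
    CGContextSetFontSize(context, 34.0);
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);

    // Core Graphics text is flipped relative to UIKit, so flip the text matrix.
    CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0, -1.0));

    // Build the glyph array using the MagicNumber offset found in FontForge.
    NSUInteger length = [mainString length];
    CGGlyph glyphs[length];
    for (NSUInteger i = 0; i < length; ++i) {
        glyphs[i] = [[mainString uppercaseString] characterAtIndex:i] + 332;
    }

    CGContextShowGlyphsAtPoint(context, 10.0, 50.0, glyphs, length);
    CGFontRelease(customFont);
}

@end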

Speech detection example iPhone

This is an example of how to start working on a speech detection application for the iPhone.

We have two major tasks:
1. Access the audio data.
2. Implement the logic part.

Task 1: Access the audio data
Download SpeakHere Example code from Apple:
http://developer.apple.com/iphone/library/samplecode/SpeakHere/index.html

Open "Classes->Play & Record->AQRecorder.mm" and edit the following function:

// AudioQueue callback function, called when an input buffer has been filled.
void AQRecorder::MyInputBufferHandler(...)

Add the following code to access the audio data:

// For signed 16-bit little-endian samples.
SInt16 *buf = (SInt16 *)inBuffer->mAudioData;
for (int i = 0; i < inBuffer->mAudioDataByteSize / 2; i = i + 2) {
    printf("\n%d\n%d ", buf[i], buf[i + 1]);
}

(Please feel free to let me know if I am doing anything wrong…)

Run the application. You will get the raw data in the gdb/console log.
Copy the data into a text file and plot it.

Download gnuplot and AquaTerm

I copied the data into a text file named "hello.txt" and saved it in my home directory.
Plot it in gnuplot using the following command:

gnuplot> plot "hello.txt"

Task 2: Implement the logic part

For speech detection purposes we do not need to check all of the data. Set up a filter that checks the data within some range. You can also work with the maximum and minimum values.


As the maximum and minimum values are similar in magnitude, you can simply check one of them, as in the sketch below.
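
To make the filtering idea concrete, here is a hypothetical helper (the names and numbers are my own, not from the SpeakHere sample) that you could call from MyInputBufferHandler with (SInt16 *)inBuffer->mAudioData and inBuffer->mAudioDataByteSize / 2. The threshold and the 10% ratio are arbitrary and need tuning against your own plots:

#include <CoreFoundation/CoreFoundation.h>   // SInt16
#include <stdbool.h>

// Returns true when enough samples in the buffer exceed the amplitude threshold.
static bool IsSpeechDetected(const SInt16 *samples, int sampleCount) {
    const SInt16 kThreshold = 5000;   // amplitude cut-off, tune it from your data
    int loud = 0;
    for (int i = 0; i < sampleCount; ++i) {
        if (samples[i] > kThreshold || samples[i] < -kThreshold)
            ++loud;
    }
    // Treat the buffer as speech/sound when at least 10% of the samples are loud.
    return sampleCount > 0 && loud * 10 > sampleCount;
}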

If you want to detect a specific type of speech or sound, analyze the graph by taking some samples.
Work out the pattern and implement logic in code to detect it.

For general detection purposes, take many samples and do the R&D yourself.

Please note that the mic "Input Volume" setting may affect your data.

Radio App Example iPhone

I was working on an application to play the audio stream from an online radio station. Here are some hints for developers who want to do similar work.

Initially we have two major tasks:
1. Read stream data.
2. Play the stream data.

From the Apple developer documentation:

http://developer.apple.com/iphone/library/codinghowtos/AudioAndVideo/index.html#STREAMING

How do I play streamed audio?

To play streamed audio, you connect to a network stream using the CFNetwork interfaces from Core Foundation, such as those in CFHTTPMessage. You then parse the network packets into audio packets using Audio File Stream Services (AudioToolbox/AudioFileStream.h). Finally, you play the audio packets using Audio Queue Services (AudioToolbox/AudioQueue.h). You can also use Audio File Stream Services to parse audio packets from an on-disk file.

Task 1: Working with HTTP stream

Read the "CFNetwork Programming Guide", especially the "Communicating with HTTP Servers" part.

Register the callback:

if (CFReadStreamSetClient(myReadStream, registeredEvents, myCallBack, &myContext)) {...}
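
For reference, a minimal sketch of how myReadStream itself might be created, registered and opened (the stream URL is a placeholder and error handling is omitted; CFHTTPMessage and CFReadStreamCreateForHTTPRequest are the CFNetwork pieces mentioned in the Apple quote above):

#include <CFNetwork/CFNetwork.h>

static CFReadStreamRef myReadStream = NULL;

// Forward declaration of the callback shown below.
static void myCallBack(CFReadStreamRef stream, CFStreamEventType type, void *clientCallBackInfo);

static void OpenRadioStream(void) {
    // Build a GET request for the (placeholder) radio stream URL.
    CFURLRef url = CFURLCreateWithString(kCFAllocatorDefault,
                                         CFSTR("http://example.com/stream"), NULL);
    CFHTTPMessageRef request = CFHTTPMessageCreateRequest(kCFAllocatorDefault,
                                                          CFSTR("GET"), url,
                                                          kCFHTTPVersion1_1);
    myReadStream = CFReadStreamCreateForHTTPRequest(kCFAllocatorDefault, request);

    // Ask for data, error and end-of-stream events.
    CFStreamClientContext myContext = {0, NULL, NULL, NULL, NULL};
    CFOptionFlags registeredEvents = kCFStreamEventHasBytesAvailable |
                                     kCFStreamEventErrorOccurred |
                                     kCFStreamEventEndEncountered;

    if (CFReadStreamSetClient(myReadStream, registeredEvents, myCallBack, &myContext)) {
        CFReadStreamScheduleWithRunLoop(myReadStream, CFRunLoopGetCurrent(),
                                        kCFRunLoopCommonModes);
    }
    CFReadStreamOpen(myReadStream);

    CFRelease(request);
    CFRelease(url);
}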

A sample callback:

static void myCallBack(CFReadStreamRef stream, CFStreamEventType type, void *clientCallBackInfo) {
    if (type == kCFStreamEventHasBytesAvailable) {
        UInt8 buffer[2048];
        CFIndex bytesRead = CFReadStreamRead(stream, buffer, sizeof(buffer));

        if (bytesRead > 0) {
            // Log the bytes we just received from the stream.
            NSString *to_add = [NSString stringWithCString:(char *)buffer length:bytesRead];
            NSLog(@"%@", to_add);
        }
        // bytesRead == 0 means end of stream; bytesRead < 0 means an error.
    }
}

Run the application. If you can see HTML tags in your debug console, then your network stream reading is working fine.

Task 2: Working with audio

Read the "Audio File Stream Services Reference" to get a basic idea of streaming audio.

Check the “AudioFileStreamExample” from:
http://developer.apple.com/Mac/library/samplecode/AudioFileStreamExample/index.html

Merge AudioFileStreamParseBytes into the myCallBack function: feed the bytes read from the network stream into the audio file stream parser.
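
One way this merge might look is sketched below. The property and packet callbacks are empty stubs standing in for the ones in Apple's AudioFileStreamExample (which set up and feed the AudioQueue), and the logging version of myCallBack above is replaced by one that hands the bytes to the parser; gAudioFileStream is a name of my own choosing:

#include <CoreFoundation/CoreFoundation.h>
#include <AudioToolbox/AudioFileStream.h>

static AudioFileStreamID gAudioFileStream = NULL;   // parser handle (hypothetical name)

// Stubs: see AudioFileStreamExample for the real bodies.
static void MyPropertyListenerProc(void *inClientData, AudioFileStreamID inAudioFileStream,
                                   AudioFileStreamPropertyID inPropertyID, UInt32 *ioFlags)
{ /* create and configure the AudioQueue once the data format is known */ }

static void MyPacketsProc(void *inClientData, UInt32 inNumberBytes, UInt32 inNumberPackets,
                          const void *inInputData,
                          AudioStreamPacketDescription *inPacketDescriptions)
{ /* enqueue the parsed packets on the AudioQueue */ }

static void myCallBack(CFReadStreamRef stream, CFStreamEventType type, void *clientCallBackInfo) {
    if (type == kCFStreamEventHasBytesAvailable) {
        UInt8 buffer[2048];
        CFIndex bytesRead = CFReadStreamRead(stream, buffer, sizeof(buffer));
        if (bytesRead > 0) {
            if (gAudioFileStream == NULL) {
                // 0 as the file type hint lets the parser guess the container format.
                AudioFileStreamOpen(NULL, MyPropertyListenerProc, MyPacketsProc,
                                    0, &gAudioFileStream);
            }
            // Hand the raw network bytes to the audio file stream parser.
            AudioFileStreamParseBytes(gAudioFileStream, (UInt32)bytesRead, buffer, 0);
        }
    }
}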

Download Sample Code:
http://www.mymacbd.com/forum/viewtopic.php?id=26