Building an Audio Player with AudioQueue

After systematically studying AudioQueue, I tried building a simple audio player on top of it. It supports play, pause, stop, and seeking (fast-forward/rewind), and handles both local and network audio sources.

The player is split into the following modules:

  • AudioProperty – holds the audio's properties
  • AudioSource – supplies the raw audio data
    • LocalAudioSource – local file source
    • NetAudioSource – network source
  • AudioStream – parses the audio stream
  • AudioQueue – plays the parsed audio
  • AudioPlayer – the player object, which coordinates the modules above.

The modules are introduced one by one below:

AudioProperty

@protocol LLYAudioPropertyDelegate <NSObject>
- (void)audioProrperty_error:(NSError *)error;
- (void)audioProperty_statusChanged:(LLYAudioStatus)audioStatus;
@end
@interface LLYAudioProperty : NSObject
@property (nonatomic, assign) UInt64 fileSize;
@property (nonatomic, assign) UInt32 packetMaxSize;
@property (nonatomic, assign) void * magicData;
@property (nonatomic, assign) UInt32 cookieSize;
@property (nonatomic, assign) LLYAudioStatus status;
@property (nonatomic, strong) NSError *error;
@property (nonatomic, weak) id <LLYAudioPropertyDelegate> delegate;
@property (nonatomic, assign) AudioStreamBasicDescription audioDesc;
- (void)error:(LLYAudioError)errorType;
- (NSString *)errorDomain:(LLYAudioError)errorType;
- (void)clean;
@end

Whenever the audio status changes, the delegate above forwards the new status to the UI.

fileSize and audioDesc are stored here mainly to compute the audio's total duration; the calculation itself was covered in an earlier post.

AudioSource

Since this module comes in two flavors, we first define a base class for the two subclasses to inherit from:

@class LLYBaseAudioSource;
@protocol LLYAudioSourceDelegate <NSObject>
- (void)audioSource_fileType:(LLYBaseAudioSource *)curAudioSource fileType:(AudioFileTypeID)fileType;
- (void)audioSource_dataArrived:(LLYBaseAudioSource *)curAudioSource data:(NSData *)data contine:(BOOL)isContine;
- (void)audioSource_finished:(LLYBaseAudioSource *)curAudioSource error:(NSError *)error;
- (void)audioSource_shouldExit:(LLYBaseAudioSource*)currAudioData;
@end
@interface LLYBaseAudioSource : NSObject
@property (nonatomic, copy) NSString *urlStr;
@property (nonatomic, weak) id<LLYAudioSourceDelegate> delegate;
@property (nonatomic, assign) int audioVersion;
@property (nonatomic, strong) LLYAudioProperty *audioProperty;
- (void)start;
- (void)cancel;
- (void)seekToOffset:(UInt64)offset;
- (AudioFileTypeID)fileTypeWithFileExtension:(NSString *)fileExtension;
- (void)audioSourceError:(NSString *)errorDomain userInfo:(NSDictionary *)userInfo;
@end

Once the raw audio data arrives, it is likewise handed off to the other modules through a delegate.

Let's look at how local and network data retrieval differ:

LLYLocalAudioSource
- (void)loadData{
    if ([[NSFileManager defaultManager] fileExistsAtPath:self.urlStr]) {
        NSError *error;
        NSDictionary *fileAttDic = [[NSFileManager defaultManager] attributesOfItemAtPath:self.urlStr error:&error];
        fileSize = [[fileAttDic objectForKey:NSFileSize] longValue];
        if (fileSize > 0) {
            self.audioProperty.fileSize = fileSize;
            filehandle = [NSFileHandle fileHandleForReadingAtPath:self.urlStr];
            currOffset = 0;
            if (!fileTimer) {
                fileTimer = [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(fileTimer_intval) userInfo:nil repeats:YES];
                [[NSRunLoop currentRunLoop] run];
            }
        }
        else {
            [self audioSourceError:@"file read error" userInfo:nil];
        }
    }
    else {
        [self audioSourceError:@"file not exists" userInfo:nil];
    }
}

Here I used a timer to read the local data; a while loop would work just as well. The whole read has to run on a background thread: doing it on the main thread would block it and freeze the UI.

- (void)fileTimer_intval{
    if (exit) {
        [filehandle closeFile];
        filehandle = nil;
        if (self.delegate) {
            [self.delegate audioSource_shouldExit:self];
        }
        // Must stop the run loop here, otherwise the thread is never released.
        CFRunLoopStop([[NSRunLoop currentRunLoop] getCFRunLoop]);
        return;
    }
    if (!filehandle) {
        return;
    }
    if (newOffset > 0) {
        currOffset = newOffset;
    }
    UInt64 currReadLength = readLength;
    if (currOffset + currReadLength > fileSize) {
        currReadLength = fileSize - currOffset;
    }
    if (currOffset == 0) {
        isContine = NO;
    }
    if (newOffset > 0) {
        [filehandle seekToFileOffset:newOffset];
        newOffset = 0;
    }
    audioFileData = [filehandle readDataOfLength:(NSUInteger)currReadLength];
    if (audioFileData && self.delegate) {
        [self.delegate audioSource_dataArrived:self data:audioFileData contine:isContine];
    }
    currOffset += currReadLength;
    if (currOffset >= fileSize) {
        if (fileTimer) {
            [fileTimer invalidate];
            fileTimer = nil;
        }
    }
    if (!isContine) {
        isContine = YES;
    }
    if (!fileTimer) {
        if (self.delegate) {
            [self.delegate audioSource_finished:self error:nil];
            [filehandle closeFile];
            filehandle = nil;
        }
    }
}

While reading, we track how much data has been consumed so that the final, shorter read doesn't run past the end of the file. Two more variables handle seeking: isContine and newOffset. When newOffset is non-zero, we seek the file handle to newOffset before continuing to read, and set isContine to NO to tell the other modules to discard their previous parsing state and start fresh on the new data.

LLYNetAudioSource

Network audio is handled a bit differently from local audio because the protocols differ: a local file is effectively the file protocol, while network audio comes over HTTP. The underlying principle, however, is the same.

- (void)requestStart{
    if (!self.audioTask) {
        self.audioRequest = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:self.urlStr]];
        if (seekOffset) {
            [self.audioRequest setValue:[NSString stringWithFormat:@"bytes=%llu-",seekOffset] forHTTPHeaderField:@"Range"];
        }
        NSURLSessionConfiguration *config = [NSURLSessionConfiguration ephemeralSessionConfiguration];
        self.audioSession = [NSURLSession sessionWithConfiguration:config delegate:self delegateQueue:[NSOperationQueue new]];
        self.audioTask = [self.audioSession dataTaskWithRequest:self.audioRequest];
        [self.audioTask resume];
        NSLog(@"requestStart current thread %@",[NSThread currentThread]);
    }
}

#pragma mark - NSURLSessionDataDelegate

- (void)URLSession:(NSURLSession *)session dataTask:(NSURLSessionDataTask *)dataTask
didReceiveResponse:(NSURLResponse *)response
 completionHandler:(void (^)(NSURLSessionResponseDisposition disposition))completionHandler{
    NSLog(@"didReceiveResponse current thread %@",[NSThread currentThread]);
    fileSize = seekOffset + response.expectedContentLength;
    seekOffset = 0;
    self.audioProperty.fileSize = fileSize;
    completionHandler(NSURLSessionResponseAllow);
}

- (void)URLSession:(NSURLSession *)session dataTask:(NSURLSessionDataTask *)dataTask
    didReceiveData:(NSData *)data{
    NSLog(@"didReceiveData current thread %@",[NSThread currentThread]);
    if (self.delegate) {
        if (currDataSize == 0) {
            isContine = NO;
        }
        [self.delegate audioSource_dataArrived:self data:data contine:isContine];
        currDataSize = currDataSize + data.length;
        if (!isContine) {
            isContine = YES;
        }
    }
}

Likewise, the network request runs on a background thread.

We request the data through the delegate API rather than a completion block: audio files are fairly large, and a completion block only fires once the entire download finishes, which means a long wait. We also don't need all the data just to start playback, so fetching data while playing matches the natural flow.

Before sending the request there is also a seek-related step: if this is the first request after a seek, we set the Range field in the HTTP header so the server returns data starting from the seek position.

LLYAudioStream

@protocol LLYAudioStreamDelegate <NSObject>
- (void)audioStream_readyToProducePackets;
- (void)audioStream_packets:(NSData *)data packetNum:(UInt32)packetCount packetDesc:(AudioStreamPacketDescription *)inPacketDesc;
@end
@interface LLYAudioStream : NSObject
@property (nonatomic, assign) AudioStreamBasicDescription audioDesc;
@property (nonatomic, assign) double duration;
@property (nonatomic, weak) id<LLYAudioStreamDelegate> delegate;
@property (nonatomic, strong) LLYAudioProperty *audioProperty;
@property (nonatomic, assign) UInt64 seekByteOffset;
@property (nonatomic, assign) double seekTime;
@property (nonatomic, assign) NSInteger audioVersion;
//- (instancetype)initWithFileType:(AudioFileTypeID)fileTypeID;
- (void)audioStreamParseBytes:(NSData *)data flags:(UInt32)flags;
- (void)getSeekToOffset:(double)seekToTime;
- (void)close;
@end

After receiving data from AudioSource, AudioStream parses it and hands the parsed packets to AudioQueue through its delegate; that is all AudioStream has to do.

The methods worth a closer look:

Computing the seek byte offset
- (void)getSeekToOffset:(double)seekToTime{
    self.seekByteOffset = dataOffset +
        (seekToTime / self.duration) * (_audioProperty.fileSize - dataOffset);
    if (self.seekByteOffset > _audioProperty.fileSize - 2 * _audioProperty.packetMaxSize) {
        self.seekByteOffset = _audioProperty.fileSize - 2 * _audioProperty.packetMaxSize;
    }
    self.seekTime = seekToTime;
    isSeeking = YES;
}
Total duration
- (double)duration{
    double calculatedBitRate = [self calculatedBitRate];
    if (calculatedBitRate == 0 || _audioProperty.fileSize == 0) {
        return 0.0;
    }
    return (_audioProperty.fileSize - dataOffset) / (calculatedBitRate * 0.125);
}

- (double)calculatedBitRate{
    if (packetDuration && packetCount > BitRateEstimationMinPackets) {
        double averagePacketByteSize = packetDataSize / packetCount;
        return 8.0 * averagePacketByteSize / packetDuration;
    }
    if (bitRate) {
        return (double)bitRate;
    }
    return 0;
}

The AudioQueue and AudioPlayer modules are comparatively simple and were already used in earlier demos, so I won't walk through them one by one.

demo