iOS Audio/Video Editing Series: Audio/Video Composition

Related Classes

AVComposition && AVCompositionTrack

The composition plays the central coordinating role in audio/video processing: essentially every editing operation goes through a composition. Moreover, AVComposition is a subclass of AVAsset, so it can be handed directly to an AVPlayer for playback — in other words, an edit can be previewed and exported right away, which is very convenient.

A track can be thought of as a storage channel for audio or video data: media samples are stored in tracks, and editing also happens on tracks. For example, to trim a clip, you first get the track the video lives on, then specify the time range to cut, and finally export the selected content.
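As a quick sketch of the idea (assuming a hypothetical `videoURL` pointing at a local media file), you can inspect an asset's tracks before editing:

```swift
import AVFoundation

// Hypothetical: videoURL points at a local media file
let asset = AVURLAsset(url: videoURL)
if let sourceTrack = asset.tracks(withMediaType: .video).first {
    // each track carries its own time range within the asset
    print(sourceTrack.timeRange.duration.seconds)
}
```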

CMTime && CMTimeRange

CMTime is the most basic unit in video editing; it represents a point in time, such as a video's duration. The key fields of the CMTime struct are value and timescale: the time it actually represents, in seconds, is value / timescale.

CMTimeRange represents a time interval: start marks the beginning of the range and duration marks its length; together these two properties fully define the range.
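For example (a minimal sketch using CoreMedia; 600 is just a common video timescale, not anything mandated by the demo):

```swift
import CoreMedia

// 6 seconds expressed at a timescale of 600
let start = CMTimeMake(value: 0, timescale: 600)
let duration = CMTimeMake(value: 3600, timescale: 600) // 3600 / 600 = 6 seconds
print(duration.seconds) // 6.0

// start + duration fully define the range
let range = CMTimeRangeMake(start: start, duration: duration)
print(range.end.seconds) // 6.0
```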

Implementation

Here is a simple demo: take the first 6 seconds of each of two videos, 01.mp4 and 02.mp4, add an audio file, 03.m4a, then compose and export a new video.

Step 0: Load the source assets
import AVFoundation
import Photos // used in Step 5 to save to the photo library

let url01 = Bundle.main.url(forResource: "01", withExtension: "mp4")
let asset01 = AVURLAsset(url: url01!)
let url02 = Bundle.main.url(forResource: "02", withExtension: "mp4")
let asset02 = AVURLAsset(url: url02!)
let url03 = Bundle.main.url(forResource: "03", withExtension: "m4a")
let asset03 = AVURLAsset(url: url03!)
Step 1: Create the composition and tracks
/// composition
let composition = AVMutableComposition()
/// video track (kCMPersistentTrackID_Invalid lets the composition assign a track ID itself)
let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
/// audio track
let audioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
Step 2: Trim and append the videos
// each clip contributes its first 6 seconds
var cursorTime = CMTime.zero
let videoDuration = CMTimeMake(value: 6, timescale: 1)
let videoTimeRange = CMTimeRangeMake(start: cursorTime, duration: videoDuration)
// first video asset track
let track01 = asset01.tracks(withMediaType: .video).first
do {
    try videoTrack?.insertTimeRange(videoTimeRange, of: track01!, at: cursorTime)
} catch let error as NSError {
    print("error when adding video to mix = \(error)")
}
// append the second clip right after the first
cursorTime = CMTimeAdd(cursorTime, videoDuration)
let track02 = asset02.tracks(withMediaType: .video).first
do {
    // videoTimeRange is in the source asset's timeline, so (0, 6s) is correct here too
    try videoTrack?.insertTimeRange(videoTimeRange, of: track02!, at: cursorTime)
} catch let error as NSError {
    print("error when adding video to mix = \(error)")
}
Step 3: Add the audio
cursorTime = CMTime.zero
// clamp to the shorter of the composed video and the audio file,
// otherwise insertTimeRange throws when the audio is shorter
let audioDuration = CMTimeMinimum(composition.duration, asset03.duration)
let audioTimeRange = CMTimeRangeMake(start: cursorTime, duration: audioDuration)
let track03 = asset03.tracks(withMediaType: .audio).first
do {
    try audioTrack?.insertTimeRange(audioTimeRange, of: track03!, at: cursorTime)
} catch let error as NSError {
    print("error when adding audio to mix = \(error)")
}
Step 4: Preview the result
self.playerItem = AVPlayerItem.init(asset: composition)
self.player = AVPlayer.init(playerItem: self.playerItem!)
self.playerLayer = AVPlayerLayer.init(player: self.player!)
self.playerLayer?.frame = self.view.bounds
self.view.layer.addSublayer(self.playerLayer!)
self.player?.play()
Step 5: Export to a file and save to the photo library
let path = NSTemporaryDirectory().appending("tmp.mp4")
if FileManager.default.fileExists(atPath: path) {
    do {
        try FileManager.default.removeItem(atPath: path)
    } catch {
        print("Temporary file removing error.")
    }
}
// export to a file
let outputUrl = URL(fileURLWithPath: path)
let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
exportSession?.outputURL = outputUrl
exportSession?.outputFileType = .mp4
exportSession?.shouldOptimizeForNetworkUse = true
exportSession?.exportAsynchronously {
    switch exportSession?.status {
    case .some(.completed):
        // save to the photo library
        PHPhotoLibrary.shared().performChanges {
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: outputUrl)
        } completionHandler: { (success, error) in
            if success {
                print("saved to the photo library")
            }
        }
    case .some(.failed):
        // surface the failure reason instead of silently ignoring it
        print("export failed: \(String(describing: exportSession?.error))")
    default:
        break
    }
}
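Note that writing to the photo library requires permission: an NSPhotoLibraryAddUsageDescription entry in Info.plist, plus a runtime authorization request. A minimal sketch using the iOS 14+ add-only access level:

```swift
import Photos

PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
    guard status == .authorized else {
        print("photo library access denied")
        return
    }
    // safe to run the PHAssetChangeRequest save from Step 5 here
}
```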

That completes a simple video composition; open the photo library to check the result. The original clips' audio is gone, replaced by the audio we added. There is still no transition between the two spliced clips — the next step is to add a transition animation between them.