How do you generate video thumbnails? Implementing a thumbnail-sheet generation feature

The full project is published at https://github.com/ximikang/ffmpegThumbnail
Author: ximikang
Source: https://segmentfault.com/a/1190000039409543
Steps for generating the thumbnail sheet

  1. Decode the video with ffmpeg
  2. Convert the frame format
  3. Grab frames from the video stream according to the number of thumbnails
  4. Build a canvas with OpenCV and compose the thumbnail sheet
Decoding the video with ffmpeg and grabbing frames according to the thumbnail count
  1. Compute the time interval between thumbnails
// Read the media file and read the header information from the container format
AVFormatContext* pFormatContext = avformat_alloc_context();
if (!pFormatContext) {
    logging("ERROR could not allocate memory for format context");
    return -1;
}
if (avformat_open_input(&pFormatContext, inputFilePath.string().c_str(), NULL, NULL) != 0) {
    logging("ERROR could not open media file");
    return -1;
}
logging("format %s, duration %lld us, bit_rate %lld", pFormatContext->iformat->name, pFormatContext->duration, pFormatContext->bit_rate);
cout << "video duration: " << pFormatContext->duration / 1000.0 / 1000.0 << "s" << endl;
int64_t video_duration = pFormatContext->duration;
int sum_count = rowNums * colNums;
// seek interval between thumbnails, in ms
int64_t time_step = video_duration / sum_count / 1000;
  2. Seek to different timestamps to read different video Packets
for (int i = 0; i < sum_count; i++) {
    // Seek near the i-th sample point; time_step is in ms, while av_seek_frame
    // with stream_index -1 expects AV_TIME_BASE (us) units, hence * 1000
    av_seek_frame(pFormatContext, -1, time_step * i * 1000 + pFormatContext->start_time, AVSEEK_FLAG_BACKWARD);
    while (av_read_frame(pFormatContext, pPacket) >= 0) {
        if (pPacket->stream_index == video_stream_index) {
            response = decode_packet_2mat(pPacket, pCodecContext, pFrame, tempImage); // returns 0 once a frame is decoded
        }
        if (response == 0) // successfully read a frame
            break;
        av_packet_unref(pPacket);
    }
}
  3. Getting the Frame
At a given seek point, the Packet read there may not decode into a corresponding Frame, so each Packet has to be checked; if no Frame was obtained, keep reading the next Packet until one is decoded.
static int decode_packet_2mat(AVPacket* pPacket, AVCodecContext* pCodecContext, AVFrame* pFrame, cv::Mat& image) {
    // send the raw packet to the decoder
    int response = avcodec_send_packet(pCodecContext, pPacket);
    if (response >= 0) {
        // return decoded output data from the decoder
        response = avcodec_receive_frame(pCodecContext, pFrame);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            logging("AVERROR(EAGAIN) or AVERROR_EOF");
            return response; // no Frame from this Packet: caller reads the next one
        }
        else if (response >= 0) {
            // got a Frame: convert it to a cv::Mat
            image = frame2Mat(pFrame, pCodecContext->pix_fmt);
            return 0;
        }
    }
    return response;
}
Frame format conversion
The frames pulled from the video stream are YUV-format Frames, and OpenCV is used for the later steps, so a format conversion is needed.
First use ffmpeg's SwsContext to convert the extracted frame from YUV to BGR, then take the raw bytes out of the BGR Frame's buffer and wrap them in an OpenCV Mat.
cv::Mat frame2Mat(AVFrame* pFrame, AVPixelFormat pPixFormat)
{
    // image init: allocate a BGR frame and its backing buffer
    AVFrame* pRGBFrame = av_frame_alloc();
    uint8_t* out_buffer = new uint8_t[avpicture_get_size(AV_PIX_FMT_BGR24, pFrame->width, pFrame->height)];
    avpicture_fill((AVPicture*)pRGBFrame, out_buffer, AV_PIX_FMT_BGR24, pFrame->width, pFrame->height);
    SwsContext* rgbSwsContext = sws_getContext(pFrame->width, pFrame->height, pPixFormat, pFrame->width, pFrame->height, AV_PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL, NULL);
    if (!rgbSwsContext) {
        logging("Error could not create frame to rgbframe sws context");
        exit(-1);
    }
    if (sws_scale(rgbSwsContext, pFrame->data, pFrame->linesize, 0, pFrame->height, pRGBFrame->data, pRGBFrame->linesize) < 0) {
        logging("Error could not scale frame to rgbframe");
        exit(-1);
    }
    cv::Mat mRGB(cv::Size(pFrame->width, pFrame->height), CV_8UC3);
    mRGB.data = (uchar*)pRGBFrame->data[0]; // note: must be data[0], not (uchar*)pRGBFrame->data
    cv::Mat image = mRGB.clone(); // deep copy before the frame buffer is released
    av_free(pRGBFrame);
    delete[] out_buffer;
    sws_freeContext(rgbSwsContext);
    return image;
}