
Rendering Architecture

We will describe Android's rendering architecture from the bottom up.

Composition

Composition is performed mainly by SurfaceFlinger, using the GPU and the HWC either in combination or individually.

GPU

Android renders graphics with the OpenGL ES (GLES) API. To create GLES contexts and provide a window system for GLES rendering, Android uses the EGL library. GLES calls render textured polygons, while EGL calls put the rendering on the screen.

In other words, on Android:

  • Hardware-accelerated drawing operations use the OpenGL ES (GLES) library
  • Native window system management uses EGL

EGL™ is an interface between Khronos rendering APIs such as OpenGL ES or OpenVG and the underlying native platform window system. It handles graphics context management, surface/buffer binding, and rendering synchronization, and enables high-performance, accelerated, mixed-mode 2D and 3D rendering using other Khronos APIs. EGL also provides interoperability between Khronos APIs for efficient data transfer, for example between a video subsystem running OpenMAX AL and a GPU running OpenGL ES.

Put differently: drawing operations go through the OpenGL ES library, while drawing contexts, windows, and configuration management go through the EGL library.
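
To make the division of labor concrete, here is a minimal sketch using the Java bindings (android.opengl.EGL14 and GLES20). The `nativeWindow` parameter is an assumed, already-valid Surface, and error handling and teardown are omitted:

import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES20;
import android.view.Surface;

final class EglSketch {
    static void renderOnce(Surface nativeWindow) {
        // EGL: connect to the display and initialize.
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        // EGL: pick a config and create a GLES 2 context.
        int[] attribs = {
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] num = new int[1];
        EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, num, 0);
        int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        EGLContext context = EGL14.eglCreateContext(
                display, configs[0], EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);

        // EGL: bind the native window (a Surface) as the render target.
        EGLSurface surface = EGL14.eglCreateWindowSurface(
                display, configs[0], nativeWindow, new int[]{ EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, surface, surface, context);

        // GLES: the actual drawing operations.
        GLES20.glClearColor(0f, 0f, 0f, 1f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        // EGL: publish the finished buffer to the window system.
        EGL14.eglSwapBuffers(display, surface);
    }
}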

HWC

Architecturally, Android does not rely on the GPU alone to put content on screen; it also provides the Hardware Composer (HWC) HAL. Being a HAL, its implementation is device-specific and is usually done by the display hardware OEM.

Display processor capabilities vary widely. The number of overlays, whether layers can be rotated or blended, and restrictions on positioning and overlap are difficult to express through an API. To accommodate these options, the HWC performs the following computation:

  • SurfaceFlinger provides the HWC with a complete list of layers and asks, "How do you want to handle these layers?"
  • The HWC responds by marking each layer for device or client composition.
  • SurfaceFlinger handles all the client layers, passes the output buffer to the HWC, and lets the HWC handle the rest.

Because hardware vendors can customize this decision-making code, the best performance can be achieved on every device. See here for details. A sketch of the handshake follows.
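
As a rough illustration of that handshake, here is a sketch with hypothetical Java stand-ins; the real interface is the native composer HAL, and every name below is illustrative, not an Android API:

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the per-frame SurfaceFlinger <-> HWC negotiation
// described above; none of these types exist in the platform.
final class HwcHandshakeSketch {
    enum CompositionType { DEVICE, CLIENT }
    static final class Layer {}
    static final class Buffer {}

    interface Hwc {
        CompositionType choose(Layer layer);                      // per-layer answer
        void present(List<Layer> deviceLayers, Buffer clientTarget);
    }

    interface Gpu {
        Buffer composite(List<Layer> layers);                     // GLES composition
    }

    // SurfaceFlinger's side of the negotiation, once per frame.
    static void composeFrame(Hwc hwc, Gpu gpu, List<Layer> all) {
        List<Layer> client = new ArrayList<>();
        List<Layer> device = new ArrayList<>();
        for (Layer l : all) {
            if (hwc.choose(l) == CompositionType.CLIENT) client.add(l);
            else device.add(l);
        }
        Buffer clientTarget = gpu.composite(client);  // GPU handles CLIENT layers
        hwc.present(device, clientTarget);            // HWC handles the rest
    }
}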

Consequently, the final framebuffer operation on the display device falls into one of three cases for each frame:

  • GLES - The GPU composites all layers and writes directly into the output buffer. The HWC does not participate in composition.
  • MIXED - The GPU composites some layers into the framebuffer; the HWC composites the framebuffer with the remaining layers and writes directly into the output buffer.
  • HWC - The HWC composites all layers and writes directly into the output buffer.

Layers that the HWC cannot handle are composited by SurfaceFlinger on the GPU via OpenGL ES; in that role, SurfaceFlinger is just another OpenGL ES client.

VSync

This event is generated by the HWC and delivered to SurfaceFlinger.
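
On the application side, this signal surfaces through Choreographer. A minimal frame-callback loop as a sketch:

import android.view.Choreographer;

final class VsyncSketch {
    // The calling thread is assumed to have a Looper (e.g. the main thread).
    static void pump() {
        Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
            @Override public void doFrame(long frameTimeNanos) {
                // Render the next frame here, then re-arm for the next vsync.
                Choreographer.getInstance().postFrameCallback(this);
            }
        });
    }
}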

Producers and Consumers

SurfaceFlinger

SurfaceFlinger creates a "layer" for each application window. A layer couples a BufferQueue with a SurfaceControl: the former provides a cross-process, zero-copy queue of graphics frames, while the latter exposes control operations. The SurfaceControl is generally managed by WindowManagerService, whereas the BufferQueue's producer end is ultimately wrapped inside the application process as a Surface (the NDK's ANativeWindow), through which the application produces data.

BufferQueue

A BufferQueue is the tie that binds producers and consumers of graphics data. It allocates graphics buffers on demand; for example, when the producer outpaces the consumer, more buffers must be allocated for enqueueing. Buffers are allocated by the HAL's gralloc, and once allocated they can be passed across processes as handles over binder IPC, so the queue itself never copies buffer contents. It offers three modes of operation:

  • Synchronous (blocking): after enqueueing a buffer, the producer blocks and can only acquire a new buffer once the consumer has consumed one.
  • Non-blocking: the producer keeps acquiring new buffers until the limit is reached, at which point an error is raised.
  • Discard: the producer keeps acquiring buffers, but only the newest one is consumed; earlier unconsumed buffers are dropped.

The core payload of the graphics buffers in a BufferQueue is the GraphicBuffer, which can also be accessed through an ANativeWindow.
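
To get a concrete feel for this pairing from Java, ImageReader (consumer end) and ImageWriter (producer end) are both backed by a BufferQueue. A sketch, with threading and error handling omitted (the listener thread is assumed to have a Looper):

import android.graphics.PixelFormat;
import android.media.Image;
import android.media.ImageReader;
import android.media.ImageWriter;
import android.view.Surface;

final class BufferQueueSketch {
    static void demo() {
        // Consumer side: allocates the queue and exposes its producer end as a Surface.
        ImageReader reader = ImageReader.newInstance(
                640, 480, PixelFormat.RGBA_8888, /* maxImages= */ 3);
        reader.setOnImageAvailableListener(r -> {
            Image img = r.acquireLatestImage();   // acquire on the consumer side
            if (img != null) img.close();         // release the buffer back to the queue
        }, null);

        // Producer side: dequeues buffers from the same queue and queues them back.
        Surface producerEnd = reader.getSurface();
        ImageWriter writer = ImageWriter.newInstance(producerEnd, /* maxImages= */ 3);
        Image out = writer.dequeueInputImage();   // acquire a free buffer, no copy
        // ... fill out.getPlanes()[0].getBuffer() with pixel data ...
        writer.queueInputImage(out);              // enqueue for the consumer
    }
}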

Production in the Application Process

Within an application process, graphics data can be produced in the following ways:

  • ViewRootImpl's Surface, which backs the Java-side Canvas API under the View tree.
  • SurfaceView, which requests its own independent Surface.
  • TextureView, which requires a SurfaceTexture: it turns the images produced into a Surface into a GLES texture that can be consumed further (sketched below).

In essence, graphics production is always built on top of a Surface.
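
For instance, the SurfaceTexture underlying TextureView is itself just another producer/consumer pair that ends in a Surface. A sketch, where `texId` is assumed to be a GLES texture name created on a thread with a current GLES context:

import android.graphics.SurfaceTexture;
import android.view.Surface;

final class SurfaceTextureSketch {
    static Surface producerFor(int texId) {
        // Consumer end: frames queued to the Surface become this GLES texture.
        SurfaceTexture st = new SurfaceTexture(texId);
        st.setOnFrameAvailableListener(t -> {
            // A real consumer would call t.updateTexImage() on its GL thread
            // here to latch the newest queued buffer into the texture.
        });
        // Producer end: hand this Surface to whoever draws (camera, canvas, GL...).
        return new Surface(st);
    }
}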

The Canvas API is obtained via Surface#lockHardwareCanvas or Surface#lockCanvas; the Canvas API is the Java-side wrapper around Skia. In native code, Skia can also be used directly. For related examples, see SurfaceView and TextureView.
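
A minimal producer sketch of that flow, assuming `surface` is a valid Surface (e.g. delivered via SurfaceHolder.Callback):

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.Surface;

final class CanvasProducerSketch {
    private static final Paint PAINT = new Paint();

    static void drawFrame(Surface surface) {
        Canvas canvas = surface.lockHardwareCanvas(); // GPU-recorded variant
        try {
            // Skia-backed Canvas API calls.
            canvas.drawColor(Color.BLACK);
            canvas.drawRect(0f, 0f, 100f, 100f, PAINT);
        } finally {
            surface.unlockCanvasAndPost(canvas);      // enqueue the frame
        }
    }
}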

Obtaining the Canvas API on the Surface Side

Judging by the latest implementation, obtaining the Canvas API on the Surface side constructs a HardwareRenderer:

class Surface {
    ...
    private final class HwuiContext {
        private final RenderNode mRenderNode;
        private HardwareRenderer mHardwareRenderer;
        private RecordingCanvas mCanvas;
        private final boolean mIsWideColorGamut;

        HwuiContext(boolean isWideColorGamut) {
            mRenderNode = RenderNode.create("HwuiCanvas", null);
            mHardwareRenderer = new HardwareRenderer();
            mHardwareRenderer.setContentRoot(mRenderNode);
            mHardwareRenderer.setSurface(Surface.this, true);
            ...
        }

        Canvas lockCanvas(int width, int height) {
            mCanvas = mRenderNode.beginRecording(width, height);
            return mCanvas;
        }

        void unlockAndPost(Canvas canvas) {
            ...
            mRenderNode.endRecording();
            mHardwareRenderer.createRenderRequest()
                    .setVsyncTime(System.nanoTime())
                    .syncAndDraw();
        }
    ...
    }
}
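
HwuiContext mirrors the public HardwareRenderer API (available since API 29). A sketch of driving it directly, where `surface`, `width`, and `height` are assumptions:

import android.graphics.Color;
import android.graphics.HardwareRenderer;
import android.graphics.RecordingCanvas;
import android.graphics.RenderNode;
import android.view.Surface;

final class HardwareRendererSketch {
    static void renderOnce(Surface surface, int width, int height) {
        RenderNode content = new RenderNode("content");
        content.setPosition(0, 0, width, height);

        RecordingCanvas canvas = content.beginRecording(); // record, don't draw yet
        canvas.drawColor(Color.RED);
        content.endRecording();

        HardwareRenderer renderer = new HardwareRenderer();
        renderer.setContentRoot(content);
        renderer.setSurface(surface);
        renderer.createRenderRequest()
                .setVsyncTime(System.nanoTime())
                .syncAndDraw();   // sync the tree and draw on the RenderThread
    }
}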
Internally, HardwareRenderer holds a rootNode: RenderNode as its root node and also constructs a RenderProxy:
    public HardwareRenderer() {
        mRootNode = RenderNode.adopt(nCreateRootRenderNode());
        mNativeProxy = nCreateProxy(!mOpaque, mRootNode.mNativeRenderNode);
        ...
    }
The RenderProxy constructor:
RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode,
                         IContextFactory* contextFactory)
        : mRenderThread(RenderThread::getInstance()), mContext(nullptr) {
    mContext = mRenderThread.queue().runSync([&]() -> CanvasContext* {
        return CanvasContext::create(mRenderThread, translucent, rootRenderNode, contextFactory);
    });
    mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode,
                              pthread_gettid_np(pthread_self()), getRenderThreadTid());
}
This creates the CanvasContext and wires it to the DrawFrameTask. Later, at draw time, DrawFrameTask does the work:
void DrawFrameTask::run() {
    ...
    if (CC_LIKELY(canDrawThisFrame)) {
        dequeueBufferDuration = context->draw();
    }
    ...
}
Next, CanvasContext's draw:
nsecs_t CanvasContext::draw() {
    bool drew = mRenderPipeline->draw(frame, windowDirty, dirty, mLightGeometry, &mLayerUpdateQueue,
                                      mContentDrawBounds, mOpaque, mLightInfo, mRenderNodes,
                                      &(profiler()));
    waitOnFences();

    bool didSwap =
            mRenderPipeline->swapBuffers(frame, drew, windowDirty, mCurrentFrameInfo, &requireSwap);
    ...
}
So the work is contracted out to mRenderPipeline, which is selected by a system property and created when the CanvasContext is constructed. Let's look at how SkiaOpenGLPipeline handles it:

bool SkiaOpenGLPipeline::draw(const Frame& frame, const SkRect& screenDirty, const SkRect& dirty,
                              const LightGeometry& lightGeometry,
                              LayerUpdateQueue* layerUpdateQueue, const Rect& contentDrawBounds,
                              bool opaque, const LightInfo& lightInfo,
                              const std::vector<sp<RenderNode>>& renderNodes,
                              FrameInfoVisualizer* profiler) {
    ...
    SkASSERT(mRenderThread.getGrContext() != nullptr);
    sk_sp<SkSurface> surface(SkSurface::MakeFromBackendRenderTarget(
            mRenderThread.getGrContext(), backendRT, this->getSurfaceOrigin(), colorType,
            mSurfaceColorSpace, &props));

    renderFrame(*layerUpdateQueue, dirty, renderNodes, opaque, contentDrawBounds, surface,
                SkMatrix::I());
    ...

    {
        ATRACE_NAME("flush commands");
        surface->flushAndSubmit();
    }
    ...
    return true;
}
If you know a little Skia, this is easy to follow: an SkSurface backed by the GrContext is constructed first, then control enters renderFrame:
void SkiaPipeline::renderFrame(const LayerUpdateQueue& layers, const SkRect& clip,
                               const std::vector<sp<RenderNode>>& nodes, bool opaque,
                               const Rect& contentDrawBounds, sk_sp<SkSurface> surface,
                               const SkMatrix& preTransform) {
    // Initialize the canvas for the current frame, that might be a recording canvas if SKP
    // capture is enabled.
    SkCanvas* canvas = tryCapture(surface.get(), nodes[0].get(), layers);

    // draw all layers up front
    renderLayersImpl(layers, opaque);

    renderFrameImpl(clip, nodes, opaque, contentDrawBounds, canvas, preTransform);

    endCapture(surface.get());

    Properties::skpCaptureEnabled = previousSkpEnabled;
}

The main job here is replaying the RenderNode nodes, which the Canvas API populated from drawing commands. A quick look at the swap:

bool SkiaOpenGLPipeline::swapBuffers(const Frame& frame, bool drew, const SkRect& screenDirty,
                                     FrameInfo* currentFrameInfo, bool* requireSwap) {
    *requireSwap = drew || mEglManager.damageRequiresSwap();
...
    return *requireSwap;
}

bool EglManager::swapBuffers(const Frame& frame, const SkRect& screenDirty) {
    eglSwapBuffersWithDamageKHR(mEglDisplay, frame.mSurface, rects, screenDirty.isEmpty() ? 0 : 1);

    EGLint err = eglGetError();
    if (CC_LIKELY(err == EGL_SUCCESS)) {
        return true;
    }
    ...
}
This is already much like calling eglSwapBuffers(eglDisplay, eglSurface) ourselves. Finally, let's look at the Canvas API:
    public @NonNull RecordingCanvas beginRecording(int width, int height) {
        ...
        mCurrentRecordingCanvas = RecordingCanvas.obtain(this, width, height);
        return mCurrentRecordingCanvas;
    }

    static RecordingCanvas obtain(@NonNull RenderNode node, int width, int height) {
        ...
            canvas = new RecordingCanvas(node, width, height);
        return canvas;
    }
    private RecordingCanvas(@NonNull RenderNode node, int width, int height) {
        super(nCreateDisplayListCanvas(node.mNativeRenderNode, width, height));
        mDensity = 0; // disable bitmap density scaling
    }

static jlong android_view_DisplayListCanvas_createDisplayListCanvas(CRITICAL_JNI_PARAMS_COMMA jlong renderNodePtr,
        jint width, jint height) {
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    return reinterpret_cast<jlong>(Canvas::create_recording_canvas(width, height, renderNode));
}

Canvas* Canvas::create_recording_canvas(int width, int height, uirenderer::RenderNode* renderNode) {
    return new uirenderer::skiapipeline::SkiaRecordingCanvas(renderNode, width, height);
}

    explicit SkiaRecordingCanvas(uirenderer::RenderNode* renderNode, int width, int height) {
        initDisplayList(renderNode, width, height);
    }

void SkiaRecordingCanvas::initDisplayList(uirenderer::RenderNode* renderNode, int width,
                                          int height) {
    mCurrentBarrier = nullptr;
    SkASSERT(mDisplayList.get() == nullptr);

    if (renderNode) {
        mDisplayList = renderNode->detachAvailableList();
    }
    if (!mDisplayList) {
        mDisplayList.reset(new SkiaDisplayList());
    }

    mDisplayList->attachRecorder(&mRecorder, SkIRect::MakeWH(width, height));
    SkiaCanvas::reset(&mRecorder);
    mDisplayList->setHasHolePunches(false);
}
In other words, RenderNode#beginRecording ultimately yields a Java-side RecordingCanvas instance, which holds a native SkiaRecordingCanvas. SkiaRecordingCanvas has the inheritance chain SkiaRecordingCanvas : SkiaCanvas : Canvas:
    explicit SkiaCanvas(const SkBitmap& bitmap);
    explicit SkiaCanvas(SkCanvas* canvas);
SkiaCanvas holds an SkCanvas. For SkiaRecordingCanvas, the SkCanvas it holds is the native RecordingCanvas : SkCanvasVirtualEnforcer<SkNoDrawCanvas>, which in effect derives from SkNoDrawCanvas.

Let's follow a drawRect call:

    @Override
    public final void drawRect(float left, float top, float right, float bottom,
            @NonNull Paint paint) {
        nDrawRect(mNativeCanvasWrapper, left, top, right, bottom, paint.getNativeInstance());
    }
It is registered with JNI as follows:
static const JNINativeMethod gDrawMethods[] = {
    {"nDrawColor","(JII)V", (void*) CanvasJNI::drawColor},
    {"nDrawColor","(JJJI)V", (void*) CanvasJNI::drawColorLong},
    {"nDrawPaint","(JJ)V", (void*) CanvasJNI::drawPaint},
    {"nDrawPoint", "(JFFJ)V", (void*) CanvasJNI::drawPoint},
    {"nDrawPoints", "(J[FIIJ)V", (void*) CanvasJNI::drawPoints},
    {"nDrawLine", "(JFFFFJ)V", (void*) CanvasJNI::drawLine},
    {"nDrawLines", "(J[FIIJ)V", (void*) CanvasJNI::drawLines},
    {"nDrawRect","(JFFFFJ)V", (void*) CanvasJNI::drawRect},
    {"nDrawRegion", "(JJJ)V", (void*) CanvasJNI::drawRegion },
    {"nDrawRoundRect","(JFFFFFFJ)V", (void*) CanvasJNI::drawRoundRect},
    {"nDrawDoubleRoundRect", "(JFFFFFFFFFFFFJ)V", (void*) CanvasJNI::drawDoubleRoundRectXY},
    {"nDrawDoubleRoundRect", "(JFFFF[FFFFF[FJ)V", (void*) CanvasJNI::drawDoubleRoundRectRadii},
    {"nDrawCircle","(JFFFJ)V", (void*) CanvasJNI::drawCircle},
    {"nDrawOval","(JFFFFJ)V", (void*) CanvasJNI::drawOval},
    {"nDrawArc","(JFFFFFFZJ)V", (void*) CanvasJNI::drawArc},
    {"nDrawPath","(JJJ)V", (void*) CanvasJNI::drawPath},
    {"nDrawVertices", "(JII[FI[FI[II[SIIJ)V", (void*)CanvasJNI::drawVertices},
    {"nDrawNinePatch", "(JJJFFFFJII)V", (void*)CanvasJNI::drawNinePatch},
    {"nDrawBitmapMatrix", "(JJJJ)V", (void*)CanvasJNI::drawBitmapMatrix},
    {"nDrawBitmapMesh", "(JJII[FI[IIJ)V", (void*)CanvasJNI::drawBitmapMesh},
    {"nDrawBitmap","(JJFFJIII)V", (void*) CanvasJNI::drawBitmap},
    {"nDrawBitmap","(JJFFFFFFFFJII)V", (void*) CanvasJNI::drawBitmapRect},
    {"nDrawBitmap", "(J[IIIFFIIZJ)V", (void*)CanvasJNI::drawBitmapArray},
    {"nDrawGlyphs", "(J[I[FIIIJJ)V", (void*)CanvasJNI::drawGlyphs},
    {"nDrawText","(J[CIIFFIJ)V", (void*) CanvasJNI::drawTextChars},
    {"nDrawText","(JLjava/lang/String;IIFFIJ)V", (void*) CanvasJNI::drawTextString},
    {"nDrawTextRun","(J[CIIIIFFZJJ)V", (void*) CanvasJNI::drawTextRunChars},
    {"nDrawTextRun","(JLjava/lang/String;IIIIFFZJ)V", (void*) CanvasJNI::drawTextRunString},
    {"nDrawTextOnPath","(J[CIIJFFIJ)V", (void*) CanvasJNI::drawTextOnPathChars},
    {"nDrawTextOnPath","(JLjava/lang/String;JFFIJ)V", (void*) CanvasJNI::drawTextOnPathString},
    {"nPunchHole", "(JFFFFFF)V", (void*) CanvasJNI::punchHole}
};

int register_android_graphics_Canvas(JNIEnv* env) {
    int ret = 0;
    ret |= RegisterMethodsOrDie(env, "android/graphics/Canvas", gMethods, NELEM(gMethods));
    ret |= RegisterMethodsOrDie(env, "android/graphics/BaseCanvas", gDrawMethods, NELEM(gDrawMethods));
    ret |= RegisterMethodsOrDie(env, "android/graphics/BaseRecordingCanvas", gDrawMethods, NELEM(gDrawMethods));
    return ret;
}
The implementation:
static void drawRect(JNIEnv* env, jobject, jlong canvasHandle, jfloat left, jfloat top,
                     jfloat right, jfloat bottom, jlong paintHandle) {
    const Paint* paint = reinterpret_cast<Paint*>(paintHandle);
    get_canvas(canvasHandle)->drawRect(left, top, right, bottom, *paint);
}
That is, SkiaRecordingCanvas's drawRect is called, which in turn calls drawRect on the SkCanvas held by the SkiaCanvas, i.e. the native RecordingCanvas : SkCanvasVirtualEnforcer<SkNoDrawCanvas> (where SkNoDrawCanvas : SkCanvas):
void SkCanvas::drawRect(const SkRect& r, const SkPaint& paint) {
    TRACE_EVENT0("skia", TRACE_FUNC);
    // To avoid redundant logic in our culling code and various backends, we always sort rects
    // before passing them along.
    this->onDrawRect(r.makeSorted(), paint);
}
But the native RecordingCanvas overrides onDrawRect:
void RecordingCanvas::onDrawRect(const SkRect& rect, const SkPaint& paint) {
    fDL->drawRect(rect, paint);
}
void DisplayListData::drawRect(const SkRect& rect, const SkPaint& paint) {
    this->push<DrawRect>(0, rect, paint);
}
DisplayListData::push is a template method, and the DrawRect struct is the record type corresponding to the drawRect operation:
struct DrawRect final : Op {
    static const auto kType = Type::DrawRect;
    DrawRect(const SkRect& rect, const SkPaint& paint) : rect(rect), paint(paint) {}
    SkRect rect;
    SkPaint paint;
    void draw(SkCanvas* c, const SkMatrix&) const { c->drawRect(rect, paint); }
};
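
The pattern is plain command recording. As a rough Java analogue (hypothetical code, not AOSP), record-then-replay looks like this:

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;
import java.util.ArrayList;
import java.util.List;

// Hypothetical Java analogue of the DisplayListData op-recording pattern:
// each draw call is captured as an immutable op; replay issues it against a
// real canvas later, on another thread or in a later frame.
interface Op { void draw(Canvas c); }

final class DisplayListSketch {
    private final List<Op> ops = new ArrayList<>();

    // Recording side: snapshot the arguments, draw nothing yet.
    void drawRect(RectF r, Paint p) {
        final RectF rect = new RectF(r);
        final Paint paint = new Paint(p);
        ops.add(c -> c.drawRect(rect, paint));
    }

    // Replay side: issue the recorded ops against a real canvas.
    void replay(Canvas target) {
        for (Op op : ops) op.draw(target);
    }
}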

Finally, RenderNode is the node used to build the View structure under hardware acceleration. It mainly holds a DisplayList plus a set of properties (describing position, scale, alpha, and so on). Every View also constructs a RenderNode when it is constructed.

All in all, the Canvas API remains rather tightly bound to Skia.