XCEngine/docs/reference/rendering_module_overview_rendergraph_srp_2026-04-17.md

XCEngine Rendering Module Walkthrough: From SceneRenderer to RenderGraph, and Onward to the SRP Main Line

Who this document is for

This document is written for newcomers to engine development.

The goal is not to show off terminology, but to clearly explain the real rendering main chain that exists in your repository right now:

  1. How the current rendering module actually runs.
  2. What RenderGraph is in this project, and what it is not.
  3. How far BuiltinForwardPipeline, ScriptableRenderPipelineHost, and the managed C# code have each gotten.
  4. Why the next step should be the SRP runtime, rather than jumping straight into building a "URP package".

This document describes only the current code in your repository; it does not speculate about an idealized future state.


Eight sentences to remember first

  1. This rendering module is no longer the old style of "one big function draws the whole frame".
  2. Its main chain is now: Scene -> CameraRenderRequest -> CameraFramePlan -> RenderGraph Recording -> RenderGraph Compile -> RenderGraph Execute.
  3. SceneRenderer does not draw anything itself; it is more like a top-level dispatch entry point.
  4. CameraFramePlan is the critical "per-camera execution plan" in the current rendering layer.
  5. RenderGraph is already a usable native graph system, but it is still a lightweight version, not a complete, fully featured industrial end state.
  6. BuiltinForwardPipeline no longer just "executes forward directly"; it also records the MainScene stage into the RenderGraph.
  7. ScriptableRenderPipelineHost is already the native seam for SRP, but the managed C# side is still a skeleton, not a complete runtime.
  8. The next thing you should do is not a URP package; it is making the managed SRP runtime actually work end to end.

The whole main chain at a glance

What you are looking at is not a pile of scattered classes, but one complete main chain:

SceneRenderer
  -> SceneRenderRequestPlanner
  -> RenderPipelineHost
      -> CameraFramePlanBuilder
      -> CameraRenderer
          -> DirectionalShadowRuntime
          -> RenderSceneExtractor
          -> ExecuteCameraFrameRenderGraphPlan
              -> RecordCameraFrameRenderGraphStages
                  -> each Stage is recorded into the RenderGraph
              -> RenderGraphCompiler::Compile
              -> RenderGraphExecutor::Execute

If you hold this picture in your head first, the class names later will be much easier to keep straight.


How the module is currently layered

Your current Rendering directory already has a rough division of responsibilities:

  • Execution/: how a frame runs, how stages are scheduled, how the graph is recorded.
  • Planning/: how scene/camera requests become an executable frame plan.
  • Graph/: the RenderGraph itself, including the builder / compiler / executor / blackboard.
  • Pipelines/: concrete render pipeline implementations, plus the host facing SRP.
  • Passes/: concrete render passes.
  • Features/: injectable effects, closer to "renderer features".
  • Shadow/: the shadow runtime and its data.
  • Extraction/: extracting RenderSceneData from the scene.
  • FrameData/: structured per-frame data such as visible objects, cameras, and lighting.

So in terms of module responsibilities, this is no longer a pile of files mixed together. The genuinely hard part is not whether the directories exist, but whether the scheduling relationships and responsibility boundaries can stay stable over time.


1. Where does a frame start?

1.1 SceneRenderer is the top-level entry point, but it does not draw anything itself

First, the header:

class SceneRenderer {
public:
    SceneRenderer();
    explicit SceneRenderer(std::unique_ptr<RenderPipeline> pipeline);
    explicit SceneRenderer(std::shared_ptr<const RenderPipelineAsset> pipelineAsset);
    ~SceneRenderer();

    void SetPipeline(std::unique_ptr<RenderPipeline> pipeline);
    void SetPipelineAsset(std::shared_ptr<const RenderPipelineAsset> pipelineAsset);
    RenderPipeline* GetPipeline() const { return m_pipelineHost.GetPipeline(); }
    const RenderPipelineAsset* GetPipelineAsset() const { return m_pipelineHost.GetPipelineAsset(); }

    std::vector<CameraFramePlan> BuildFramePlans(
        const Components::Scene& scene,
        Components::CameraComponent* overrideCamera,
        const RenderContext& context,
        const RenderSurface& surface);

    bool Render(const CameraFramePlan& plan);
    bool Render(const std::vector<CameraFramePlan>& plans);
    bool Render(
        const Components::Scene& scene,
        Components::CameraComponent* overrideCamera,
        const RenderContext& context,
        const RenderSurface& surface);

    SceneRenderRequestPlanner m_requestPlanner;
    RenderPipelineHost m_pipelineHost;
};

Then the implementation:

std::vector<CameraFramePlan> SceneRenderer::BuildFramePlans(
    const Components::Scene& scene,
    Components::CameraComponent* overrideCamera,
    const RenderContext& context,
    const RenderSurface& surface) {
    const std::vector<CameraRenderRequest> requests =
        m_requestPlanner.BuildRequests(
            scene,
            overrideCamera,
            context,
            surface,
            m_pipelineHost.GetPipelineAsset());
    return m_pipelineHost.BuildFramePlans(requests);
}

bool SceneRenderer::Render(const CameraFramePlan& plan) {
    return m_pipelineHost.Render(plan);
}

bool SceneRenderer::Render(const std::vector<CameraFramePlan>& plans) {
    return m_pipelineHost.Render(plans);
}

bool SceneRenderer::Render(
    const Components::Scene& scene,
    Components::CameraComponent* overrideCamera,
    const RenderContext& context,
    const RenderSurface& surface) {
    return Render(BuildFramePlans(scene, overrideCamera, context, surface));
}

This code says it clearly:

  • SceneRenderer never issues draw calls directly.
  • It first asks SceneRenderRequestPlanner to produce a CameraRenderRequest per camera.
  • It then asks RenderPipelineHost to turn each request into a CameraFramePlan.
  • Finally it hands execution back to RenderPipelineHost.

So your topmost entry point is already "plan first, then execute", not "render as soon as you see a camera".


2. CameraRenderRequest and CameraFramePlan: the two most important data layers in the current renderer

Many newcomers initially treat these two structures as roughly the same thing. They are not.

  • CameraRenderRequest is closer to "what we want to render".
  • CameraFramePlan is closer to "how this camera will actually be rendered this frame, in the end".

2.1 CameraRenderRequest: the raw request from the scene and the camera

struct CameraRenderRequest {
    const Components::Scene* scene = nullptr;
    Components::CameraComponent* camera = nullptr;
    RenderContext context;
    RenderSurface surface;
    DepthOnlyRenderRequest depthOnly;
    ShadowCasterRenderRequest shadowCaster;
    DirectionalShadowRenderPlan directionalShadow;
    PostProcessRenderRequest postProcess;
    FinalOutputRenderRequest finalOutput;
    ResolvedFinalColorPolicy finalColorPolicy = {};
    ObjectIdRenderRequest objectId;
    float cameraDepth = 0.0f;
    uint8_t cameraStackOrder = 0;
    RenderClearFlags clearFlags = RenderClearFlags::All;
    bool hasClearColorOverride = false;
    Math::Color clearColorOverride = Math::Color::Black();
    RenderPassSequence* preScenePasses = nullptr;
    RenderPassSequence* postScenePasses = nullptr;
    RenderPassSequence* overlayPasses = nullptr;

    bool IsValid() const {
        return scene != nullptr &&
               camera != nullptr &&
               context.IsValid();
    }
};

A few things stand out in this structure:

  • It already bundles the requests for main scene rendering, shadows, post-processing, final output, and ObjectId.
  • It carries RenderPassSequence* pointers, which means the system already supports hanging custom pass sequences around the camera's main flow.
  • At this point it is still a request; it is not yet a finalized set of graph resources and stage output relationships.

2.2 SceneRenderRequestPlanner: collect the cameras first, then let the pipeline asset modify the requests

std::vector<CameraRenderRequest> SceneRenderRequestPlanner::BuildRequests(
    const Components::Scene& scene,
    Components::CameraComponent* overrideCamera,
    const RenderContext& context,
    const RenderSurface& surface,
    const RenderPipelineAsset* pipelineAsset) const {
    std::vector<CameraRenderRequest> requests;
    const std::vector<Components::CameraComponent*> cameras =
        CollectCameras(scene, overrideCamera);

    size_t renderedBaseCameraCount = 0;
    for (Components::CameraComponent* camera : cameras) {
        CameraRenderRequest request;
        if (!SceneRenderRequestUtils::BuildCameraRenderRequest(
                scene,
                *camera,
                context,
                surface,
                renderedBaseCameraCount,
                requests.size(),
                request)) {
            continue;
        }

        if (pipelineAsset != nullptr) {
            pipelineAsset->ConfigureCameraRenderRequest(
                request,
                renderedBaseCameraCount,
                requests.size(),
                m_directionalShadowPlanningSettings);
        } else {
            ApplyDefaultRenderPipelineAssetCameraRenderRequestPolicy(
                request,
                renderedBaseCameraCount,
                requests.size(),
                m_directionalShadowPlanningSettings);
        }

        requests.push_back(request);
        if (camera->GetStackType() == Components::CameraStackType::Base) {
            ++renderedBaseCameraCount;
        }
    }

    return requests;
}

This code matters a lot, because it already embodies the key SRP idea that "the pipeline asset participates in planning":

  • Start from a generic camera request.
  • Hand it to the RenderPipelineAsset for policy configuration.
  • If there is no custom asset, fall back to the default policy.

This matches Unity's approach. Going forward, most SRP/URP policy decisions should live at this layer or the one below it, not scattered across individual passes.

2.3 RenderPipelineAsset already has two critical planning hooks

class RenderPipelineAsset {
public:
    virtual ~RenderPipelineAsset() = default;

    virtual std::unique_ptr<RenderPipeline> CreatePipeline() const = 0;
    virtual void ConfigurePipeline(RenderPipeline&) const {}
    virtual void ConfigureCameraRenderRequest(
        CameraRenderRequest& request,
        size_t renderedBaseCameraCount,
        size_t renderedRequestCount,
        const DirectionalShadowPlanningSettings& directionalShadowSettings) const;
    virtual FinalColorSettings GetDefaultFinalColorSettings() const { return {}; }
    virtual void ConfigureCameraFramePlan(CameraFramePlan& plan) const;
};

The implementation is just as important:

void RenderPipelineAsset::ConfigureCameraRenderRequest(
    CameraRenderRequest& request,
    size_t renderedBaseCameraCount,
    size_t renderedRequestCount,
    const DirectionalShadowPlanningSettings& directionalShadowSettings) const {
    ApplyDefaultRenderPipelineAssetCameraRenderRequestPolicy(
        request,
        renderedBaseCameraCount,
        renderedRequestCount,
        directionalShadowSettings);
}

void RenderPipelineAsset::ConfigureCameraFramePlan(CameraFramePlan& plan) const {
    ApplyDefaultRenderPipelineAssetCameraFramePlanPolicy(
        plan,
        GetDefaultFinalColorSettings());
}

This shows the native side has moved past "the asset only creates the pipeline".

The asset is now also responsible for:

  • configuring the CameraRenderRequest
  • configuring the CameraFramePlan

This is exactly what a future SRP/URP asset should take over.

2.4 CameraFramePlan: the final per-camera execution plan

struct CameraFramePlan {
    static RenderSurface BuildGraphManagedIntermediateSurfaceTemplate(
        const RenderSurface& surface);

    CameraRenderRequest request = {};
    ShadowCasterRenderRequest shadowCaster = {};
    DirectionalShadowRenderPlan directionalShadow = {};
    PostProcessRenderRequest postProcess = {};
    FinalOutputRenderRequest finalOutput = {};
    ResolvedFinalColorPolicy finalColorPolicy = {};
    RenderPassSequence* preScenePasses = nullptr;
    RenderPassSequence* postScenePasses = nullptr;
    RenderPassSequence* overlayPasses = nullptr;
    CameraFrameColorChainPlan colorChain = {};
    RenderSurface graphManagedSceneSurface = {};

    static CameraFramePlan FromRequest(const CameraRenderRequest& request);

    bool IsValid() const;
    void ConfigureGraphManagedSceneSurface();
    void ClearOwnedPostProcessSequence();
    void SetOwnedPostProcessSequence(std::shared_ptr<RenderPassSequence> sequence);
    const std::shared_ptr<RenderPassSequence>& GetOwnedPostProcessSequence() const {
        return m_ownedPostProcessSequence;
    }
    void ClearOwnedFinalOutputSequence();
    void SetOwnedFinalOutputSequence(std::shared_ptr<RenderPassSequence> sequence);
    const std::shared_ptr<RenderPassSequence>& GetOwnedFinalOutputSequence() const {
        return m_ownedFinalOutputSequence;
    }
    bool UsesGraphManagedSceneColor() const;
    bool UsesGraphManagedOutputColor(CameraFrameStage stage) const;
    CameraFrameColorSource ResolveStageColorSource(CameraFrameStage stage) const;
    bool IsPostProcessStageValid() const;
    bool IsFinalOutputStageValid() const;
    bool HasFrameStage(CameraFrameStage stage) const;
    RenderPassSequence* GetPassSequence(CameraFrameStage stage) const;
    const CameraFrameFullscreenStagePlan* GetFullscreenStagePlan(CameraFrameStage stage) const;
    const FullscreenPassRenderRequest* GetFullscreenPassRequest(CameraFrameStage stage) const;
    const ScenePassRenderRequest* GetScenePassRequest(CameraFrameStage stage) const;
    const ObjectIdRenderRequest* GetObjectIdRequest(CameraFrameStage stage) const;
    const RenderSurface* GetSharedStageOutputSurface(CameraFrameStage stage) const;
    const RenderSurface& GetMainSceneSurface() const;
    const RenderSurface& GetFinalCompositedSurface() const;
    bool RequiresIntermediateSceneColor() const;

private:
    std::shared_ptr<RenderPassSequence> m_ownedPostProcessSequence = {};
    std::shared_ptr<RenderPassSequence> m_ownedFinalOutputSequence = {};
};

This structure is not a simple data bag. It effectively defines:

  • which stages this camera has;
  • which stages form a fullscreen chain;
  • whether main scene color writes directly to the final surface, or first to a graph-managed intermediate texture;
  • which color source post-processing and final output each read from.

In short, CameraFramePlan is already "the organizational blueprint for this camera's frame".

2.5 The post-process chain is no longer hard-coded; it is computed from the plan

void PlanCameraFrameFullscreenStages(CameraFramePlan& plan) {
    plan.ClearOwnedPostProcessSequence();
    plan.ClearOwnedFinalOutputSequence();

    if (plan.request.camera == nullptr ||
        plan.request.context.device == nullptr ||
        !HasValidColorTarget(plan.request.surface)) {
        return;
    }

    std::unique_ptr<RenderPassSequence> postProcessSequence =
        BuildCameraPostProcessPassSequence(plan.request.camera->GetPostProcessPasses());
    std::unique_ptr<RenderPassSequence> finalOutputSequence =
        BuildFinalColorPassSequence(plan.finalColorPolicy);

    const bool hasPostProcess =
        postProcessSequence != nullptr && postProcessSequence->GetPassCount() > 0u;
    const bool hasFinalOutput =
        finalOutputSequence != nullptr && finalOutputSequence->GetPassCount() > 0u;
    if (!hasPostProcess && !hasFinalOutput) {
        return;
    }

    if (plan.request.surface.GetSampleCount() > 1u) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "SceneRenderer fullscreen post-process/final-output chain currently requires a single-sample main scene surface");
        return;
    }

    if (hasPostProcess) {
        plan.SetOwnedPostProcessSequence(
            SharePassSequence(std::move(postProcessSequence)));
        plan.colorChain.usesGraphManagedSceneColor = true;
        plan.colorChain.postProcess.source = CameraFrameColorSource::MainSceneColor;
        plan.colorChain.postProcess.usesGraphManagedOutputColor = hasFinalOutput;
        if (!hasFinalOutput) {
            plan.postProcess.destinationSurface = plan.request.surface;
        }
    }

    if (hasFinalOutput) {
        plan.SetOwnedFinalOutputSequence(
            SharePassSequence(std::move(finalOutputSequence)));
        plan.colorChain.usesGraphManagedSceneColor = true;
        plan.colorChain.finalOutput.source =
            hasPostProcess
                ? CameraFrameColorSource::PostProcessColor
                : CameraFrameColorSource::MainSceneColor;
        plan.finalOutput.destinationSurface = plan.request.surface;
    }

    if (plan.UsesGraphManagedOutputColor(CameraFrameStage::MainScene)) {
        plan.ConfigureGraphManagedSceneSurface();
    }
}

This code shows the system already has the concept of a "color chain plan":

  • If post-processing is requested, main scene color should not land directly on the final backbuffer.
  • It first becomes a graph-managed scene color.
  • Post-processing reads from MainSceneColor.
  • If there is also a final output stage, the post-process result can likewise stay in a graph-managed output.

In other words, post-process passes are no longer bolted directly onto the end of the main render function.

This matters a lot for a future URP-style renderer.


3. Why is a frame split into CameraFrameStage?

3.1 The current stage enum

enum class CameraFrameStage : uint8_t {
    PreScenePasses,
    ShadowCaster,
    DepthOnly,
    MainScene,
    PostProcess,
    FinalOutput,
    ObjectId,
    PostScenePasses,
    OverlayPasses
};

enum class CameraFrameStageExecutionKind : uint8_t {
    Sequence,
    StandalonePass,
    MainScenePipeline
};

Now the stage classification logic:

inline constexpr CameraFrameStageExecutionKind GetCameraFrameStageExecutionKind(
    CameraFrameStage stage) {
    switch (stage) {
    case CameraFrameStage::PreScenePasses:
    case CameraFrameStage::PostProcess:
    case CameraFrameStage::FinalOutput:
    case CameraFrameStage::PostScenePasses:
    case CameraFrameStage::OverlayPasses:
        return CameraFrameStageExecutionKind::Sequence;
    case CameraFrameStage::ShadowCaster:
    case CameraFrameStage::DepthOnly:
    case CameraFrameStage::ObjectId:
        return CameraFrameStageExecutionKind::StandalonePass;
    default:
        return CameraFrameStageExecutionKind::MainScenePipeline;
    }
}

This code is worth reading several times if you are new to this.

It shows that the per-camera render flow is not a string of ad-hoc if/else; each stage has an explicit execution kind:

  • Sequence: a sequence of passes.
  • StandalonePass: a single independent pass, such as shadows, depth, or ObjectId.
  • MainScenePipeline: the main scene stage, owned by the pipeline as a whole.

In other words, MainScene is not an ordinary pass; it is the body of the pipeline.

This abstraction is right. The future SRP entry point will most likely plug in here as well.

3.2 Each stage also has different resource semantics toward the graph

inline constexpr bool DoesCameraFrameStageGraphOwnColorTransitions(
    CameraFrameStage stage) {
    return stage == CameraFrameStage::MainScene ||
           stage == CameraFrameStage::PostProcess ||
           stage == CameraFrameStage::FinalOutput ||
           stage == CameraFrameStage::ObjectId;
}

inline constexpr bool DoesCameraFrameStageGraphOwnDepthTransitions(
    CameraFrameStage stage) {
    return stage == CameraFrameStage::ShadowCaster ||
           stage == CameraFrameStage::DepthOnly ||
           stage == CameraFrameStage::MainScene ||
           stage == CameraFrameStage::ObjectId;
}

This shows that stages are not just logical segments; they also decide whether the graph takes over state transitions.

Put differently, the system is already expressing resource ownership at the stage level.


4. CameraRenderer: the component that actually runs a CameraFramePlan

4.1 The responsibilities of CameraRenderer::Render

bool CameraRenderer::Render(
    const CameraFramePlan& plan) {
    if (!plan.IsValid() || m_pipeline == nullptr) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: plan invalid or pipeline missing");
        return false;
    }

    const RenderSurface& mainSceneSurface = plan.GetMainSceneSurface();
    if (mainSceneSurface.GetRenderAreaWidth() == 0 ||
        mainSceneSurface.GetRenderAreaHeight() == 0) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: main scene surface render area is empty");
        return false;
    }
    if (plan.request.depthOnly.IsRequested() &&
        !plan.request.depthOnly.IsValid()) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: depth-only request invalid");
        return false;
    }
    if (plan.postProcess.IsRequested() &&
        !plan.IsPostProcessStageValid()) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: post-process request invalid");
        return false;
    }
    if (plan.UsesGraphManagedOutputColor(CameraFrameStage::MainScene) &&
        (m_pipeline == nullptr ||
         !m_pipeline->SupportsStageRenderGraph(CameraFrameStage::MainScene))) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: graph-managed main scene color requires pipeline main-scene render-graph support");
        return false;
    }
    if (plan.finalOutput.IsRequested() &&
        !plan.IsFinalOutputStageValid()) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: final-output request invalid");
        return false;
    }
    if (plan.request.objectId.IsRequested() &&
        !plan.request.objectId.IsValid()) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: object-id request invalid");
        return false;
    }

    DirectionalShadowExecutionState shadowState = {};
    if (m_directionalShadowRuntime == nullptr ||
        !m_directionalShadowRuntime->ResolveExecutionState(
            plan,
            *m_pipeline,
            shadowState)) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: DirectionalShadowRuntime::ResolveExecutionState returned false");
        return false;
    }

    RenderSceneData sceneData = {};
    if (!BuildSceneDataForPlan(plan, shadowState, sceneData)) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: BuildSceneDataForPlan returned false");
        return false;
    }

    if (!ExecuteRenderPlan(plan, shadowState, sceneData)) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: ExecuteRenderPlan returned false");
        return false;
    }

    return true;
}

This code tells you:

  • CameraRenderer validates the plan first.
  • Then it resolves the shadow execution state.
  • Then it extracts RenderSceneData from the scene.
  • Finally it hands the plan to the RenderGraph execution chain.

So the essence of CameraRenderer is:

turning "one camera's frame plan" into "executable render data plus a graph execution flow".

4.2 Shadows are not hard-wired into the main render; they are resolved into an execution state first

bool DirectionalShadowRuntime::ResolveExecutionState(
    const CameraFramePlan& plan,
    const RenderPipeline& pipeline,
    DirectionalShadowExecutionState& outShadowState) {
    outShadowState = {};
    outShadowState.shadowCasterRequest = plan.shadowCaster;

    if (outShadowState.shadowCasterRequest.IsRequested()) {
        return outShadowState.shadowCasterRequest.IsValid();
    }

    if (!plan.directionalShadow.IsValid()) {
        return true;
    }

    const DirectionalShadowSurfaceAllocation* shadowAllocation =
        m_surfaceCache.Resolve(plan.request.context, plan.directionalShadow);
    if (shadowAllocation == nullptr || !shadowAllocation->IsValid()) {
        return false;
    }

    return pipeline.ConfigureDirectionalShadowExecutionState(
        plan,
        *shadowAllocation,
        outShadowState);
}

This shows shadow execution is no longer "a few shadow lines written inline in the main render".

It now has three layers:

  • DirectionalShadowRenderPlan
  • DirectionalShadowRuntime
  • DirectionalShadowExecutionState

This is also why I keep saying that later moving shadow policy up into a URP-like package layer is feasible: the native side is already separating the "execution kernel" from the "organizing policy".


5. What is a RenderGraph?

If you are new, forget the jargon for a moment; here is the plain-language version:

A RenderGraph means "first declare which textures this frame uses and which passes read/write them, then let the system work out ordering, lifetimes, and state transitions, and finally execute everything in one go".

The difference from the traditional style:

  • Traditional: you draw as you think, inserting barriers and managing intermediate RTs by hand.
  • RenderGraph: you first record "what should happen this frame", then the system compiles and executes it.

5.1 The core data structures of your RenderGraph today

class RenderGraph {
public:
    void Reset();

    size_t GetTextureCount() const {
        return m_textures.size();
    }

    size_t GetPassCount() const {
        return m_passes.size();
    }

private:
    struct TextureResource {
        Containers::String name;
        RenderGraphTextureDesc desc = {};
        RenderGraphTextureKind kind = RenderGraphTextureKind::Transient;
        RHI::RHIResourceView* importedView = nullptr;
        RenderGraphImportedTextureOptions importedOptions = {};
    };

    struct TextureAccess {
        RenderGraphTextureHandle texture = {};
        RenderGraphAccessMode mode = RenderGraphAccessMode::Read;
        RenderGraphTextureAspect aspect = RenderGraphTextureAspect::Color;
    };

    struct PassNode {
        Containers::String name;
        RenderGraphPassType type = RenderGraphPassType::Raster;
        std::vector<TextureAccess> accesses;
        RenderGraphExecuteCallback executeCallback = {};
    };

    std::vector<TextureResource> m_textures;
    std::vector<PassNode> m_passes;
};

This code lays the graph's essence bare:

  • The graph's resource objects are currently textures.
  • Resources come in two kinds: Imported and Transient.
  • A pass records "access declarations" plus an "execute callback".

In plain language:

  • Imported: textures that already exist outside the graph, such as the swapchain/backbuffer or an existing depth target.
  • Transient: intermediate textures the graph creates just for this frame.

5.2 The RenderGraph handles and descriptor structs

struct RenderGraphTextureHandle {
    Core::uint32 index = kInvalidRenderGraphHandle;

    bool IsValid() const {
        return index != kInvalidRenderGraphHandle;
    }
};

struct RenderGraphTextureDesc {
    Core::uint32 width = 0u;
    Core::uint32 height = 0u;
    Core::uint32 format = static_cast<Core::uint32>(RHI::Format::Unknown);
    Core::uint32 textureType = static_cast<Core::uint32>(RHI::TextureType::Texture2D);
    Core::uint32 sampleCount = 1u;
    Core::uint32 sampleQuality = 0u;

    bool IsValid() const {
        return width > 0u &&
               height > 0u &&
               format != static_cast<Core::uint32>(RHI::Format::Unknown) &&
               sampleCount > 0u;
    }
};

struct RenderGraphImportedTextureOptions {
    RHI::ResourceStates initialState = RHI::ResourceStates::Common;
    RHI::ResourceStates finalState = RHI::ResourceStates::Common;
    bool graphOwnsTransitions = false;
};

Two key points here:

  1. The graph refers to resources by handle, not by raw pointers flying around.
  2. An imported resource can declare whether the graph takes over its state transitions.

This is why you can keep plugging some of the old surfaces in, while letting the graph own the newly created transient textures.

5.3 The builder API: record first, execute later

class RenderGraphPassBuilder {
public:
    void ReadTexture(RenderGraphTextureHandle texture);
    void WriteTexture(RenderGraphTextureHandle texture);
    void ReadDepthTexture(RenderGraphTextureHandle texture);
    void WriteDepthTexture(RenderGraphTextureHandle texture);
    void SetExecuteCallback(RenderGraphExecuteCallback callback);
};

class RenderGraphBuilder {
public:
    explicit RenderGraphBuilder(RenderGraph& graph)
        : m_graph(graph) {
    }

    void Reset();

    RenderGraphTextureHandle ImportTexture(
        const Containers::String& name,
        const RenderGraphTextureDesc& desc,
        RHI::RHIResourceView* importedView = nullptr,
        const RenderGraphImportedTextureOptions& importedOptions = {});

    RenderGraphTextureHandle CreateTransientTexture(
        const Containers::String& name,
        const RenderGraphTextureDesc& desc);

    RenderGraphPassHandle AddRasterPass(
        const Containers::String& name,
        const std::function<void(RenderGraphPassBuilder&)>& setup);

    RenderGraphPassHandle AddComputePass(
        const Containers::String& name,
        const std::function<void(RenderGraphPassBuilder&)>& setup);
};

The implementation is equally direct:

RenderGraphPassHandle RenderGraphBuilder::AddPass(
    const Containers::String& name,
    RenderGraphPassType type,
    const std::function<void(RenderGraphPassBuilder&)>& setup) {
    RenderGraph::PassNode pass = {};
    pass.name = name;
    pass.type = type;
    m_graph.m_passes.push_back(pass);

    RenderGraphPassHandle handle = {};
    handle.index = static_cast<Core::uint32>(m_graph.m_passes.size() - 1u);

    if (setup) {
        RenderGraphPassBuilder passBuilder(&m_graph, handle);
        setup(passBuilder);
    }

    return handle;
}

So what RenderGraphBuilder does today is very plain:

  • Create a pass node.
  • Register read/write dependencies inside setup.
  • Record the execute callback.

There is no magic here.

5.4 What the compiler does

The core of RenderGraphCompiler is not "generating draw calls". It:

  1. Validates that the resource descriptors are legal.
  2. Builds pass dependencies from the read/write relationships.
  3. Topologically sorts the passes.
  4. Computes each texture's lifetime.
  5. Computes the resource state required by each access.
  6. Emits a state transition plan.

The following excerpt is the heart of the dependency-building logic:

for (Core::uint32 passIndex = 0u; passIndex < static_cast<Core::uint32>(passCount); ++passIndex) {
    const RenderGraph::PassNode& pass = graph.m_passes[passIndex];
    for (const RenderGraph::TextureAccess& access : pass.accesses) {
        if (!access.texture.IsValid() || access.texture.index >= textureCount) {
            WriteError(
                Containers::String("RenderGraph pass '") + pass.name +
                    "' references an invalid texture handle",
                outErrorMessage);
            return false;
        }

        const RenderGraph::TextureResource& texture = graph.m_textures[access.texture.index];
        std::vector<Core::uint32>& readers = lastReaders[access.texture.index];
        Core::uint32& writer = lastWriter[access.texture.index];

        if (access.mode == RenderGraphAccessMode::Read) {
            if (texture.kind == RenderGraphTextureKind::Transient &&
                writer == kInvalidRenderGraphHandle) {
                WriteError(
                    Containers::String("RenderGraph transient texture '") + texture.name +
                        "' is read before any pass writes it",
                    outErrorMessage);
                return false;
            }

            addEdge(writer, passIndex);
            addUniqueReader(readers, passIndex);
            continue;
        }

        addEdge(writer, passIndex);
        for (Core::uint32 readerPassIndex : readers) {
            addEdge(readerPassIndex, passIndex);
        }
        readers.clear();
        writer = passIndex;
    }
}

What it means is very direct:

  • Reading a texture depends on the pass that last wrote it.
  • Writing a texture depends on the last writer, and also on every reader that a newer write has not yet superseded.

This is the most basic, but correct, read/write dependency model.

Then it topologically sorts:

while (executionOrder.size() < passCount) {
    bool progressed = false;
    for (Core::uint32 passIndex = 0u; passIndex < static_cast<Core::uint32>(passCount); ++passIndex) {
        if (emitted[passIndex] || incomingEdgeCount[passIndex] != 0u) {
            continue;
        }

        emitted[passIndex] = true;
        executionOrder.push_back(passIndex);
        for (Core::uint32 dependentPassIndex : outgoingEdges[passIndex]) {
            if (incomingEdgeCount[dependentPassIndex] > 0u) {
                --incomingEdgeCount[dependentPassIndex];
            }
        }

        progressed = true;
        break;
    }

    if (!progressed) {
        WriteError(
            "RenderGraph failed to compile because pass dependencies contain a cycle",
            outErrorMessage);
        outCompiledGraph.Reset();
        return false;
    }
}

If you are new, just remember one thing:

the compiler turns "recording order" into "correct execution order".

5.5 What the executor does

The executor's main function is short:

bool RenderGraphExecutor::Execute(
    const CompiledRenderGraph& graph,
    const RenderContext& renderContext,
    Containers::String* outErrorMessage) {
    if (outErrorMessage != nullptr) {
        outErrorMessage->Clear();
    }

    RenderGraphRuntimeResources runtimeResources(graph);
    if (!runtimeResources.Initialize(renderContext, outErrorMessage)) {
        return false;
    }

    RenderGraphExecutionContext executionContext = {
        renderContext,
        &runtimeResources
    };
    for (const CompiledRenderGraph::CompiledPass& pass : graph.m_passes) {
        if (!runtimeResources.TransitionPassResources(pass, renderContext, outErrorMessage)) {
            return false;
        }

        if (pass.executeCallback) {
            pass.executeCallback(executionContext);
        }
    }

    if (!runtimeResources.TransitionGraphOwnedImportsToFinalStates(
            renderContext,
            outErrorMessage)) {
        return false;
    }

    return true;
}

It does four things:

  1. Initializes the runtime resources.
  2. Transitions resource states before each pass executes.
  3. Runs each pass callback.
  4. At the end, transitions graph-owned imported resources to their declared final states.

And the transient texture creation logic lives here:

if (texture.kind != RenderGraphTextureKind::Transient || !lifetime.used) {
    continue;
}

if (renderContext.device == nullptr) {
    if (outErrorMessage != nullptr) {
        *outErrorMessage =
            Containers::String("RenderGraph cannot allocate transient texture without a valid device: ") +
            texture.name;
    }
    Reset();
    return false;
}

if (!CreateTransientTexture(
        renderContext,
        static_cast<Core::uint32>(textureIndex),
        texture,
        m_textureAllocations[textureIndex])) {
    if (outErrorMessage != nullptr) {
        *outErrorMessage =
            Containers::String("RenderGraph failed to allocate transient texture: ") +
            texture.name;
    }
    Reset();
    return false;
}

So this graph already really does:

  • allocate transient RTs
  • create RTV/DSV/SRV/UAV views
  • insert barriers automatically
  • run pass callbacks

It is not just an interface shell.

5.6 What is the RenderGraphBlackboard?

class RenderGraphBlackboard {
public:
    template <typename T, typename... Args>
    T& Emplace(Args&&... args) {
        using StorageType = std::remove_cv_t<std::remove_reference_t<T>>;
        auto value = std::make_shared<StorageType>(std::forward<Args>(args)...);
        StorageType& reference = *value;
        m_entries[std::type_index(typeid(StorageType))] = std::move(value);
        return reference;
    }

    template <typename T>
    T* TryGet() {
        using StorageType = std::remove_cv_t<std::remove_reference_t<T>>;
        const auto entryIt = m_entries.find(std::type_index(typeid(StorageType)));
        return entryIt != m_entries.end()
            ? static_cast<StorageType*>(entryIt->second.get())
            : nullptr;
    }

    void Clear() {
        m_entries.clear();
    }

private:
    std::unordered_map<std::type_index, std::shared_ptr<void>> m_entries;
};

You can think of the blackboard as:

the shared notebook used while this frame's graph is being recorded.

Who writes into it?

  • A stage publishes the handles it produced: color, depth, shadow map.

Who reads from it?

  • Later stages, features, and the pipeline graph builder.

5.7 What this RenderGraph already does, and what it does not do yet

Already done:

  • texture import / transient textures
  • raster / compute pass recording
  • read/write dependency analysis
  • topological sorting
  • lifetime computation
  • state transitions
  • transient resource creation
  • blackboard

Not done yet, or still very lightweight:

  • buffer resources in the graph
  • pass culling
  • resource aliasing/reuse
  • barrier batching/optimization
  • async compute / multi-queue
  • subresource-level state tracking
  • stronger lifetime-aliasing optimizations

So the accurate conclusion today is:

you already have a usable native RenderGraph kernel, but it is v1, not the end state.


6. How the project currently records a camera frame into the RenderGraph

This is the most central part of the current rendering module, and also the part most likely to confuse a newcomer.

I will break it into five steps.

6.1 Step one: create the graph, then record all the stages

bool ExecuteCameraFrameRenderGraphPlan(
    const CameraFramePlan& plan,
    const DirectionalShadowExecutionState& shadowState,
    const RenderSceneData& sceneData,
    RenderPipeline* pipeline) {
    RenderGraph graph = {};
    RenderGraphBuilder graphBuilder(graph);
    RenderGraphBlackboard blackboard = {};

    CameraFrameExecutionState executionState = {};
    executionState.pipeline = pipeline;

    bool stageExecutionSucceeded = true;
    if (!RecordCameraFrameRenderGraphStages(
        plan,
        shadowState,
        sceneData,
        executionState,
        graphBuilder,
        blackboard,
        stageExecutionSucceeded)) {
        return false;
    }

    CompiledRenderGraph compiledGraph = {};
    Containers::String errorMessage;
    if (!RenderGraphCompiler::Compile(graph, compiledGraph, &errorMessage)) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            Containers::String("CameraRenderer::Render failed: RenderGraph compile failed: ") +
                errorMessage);
        return false;
    }

    if (!RenderGraphExecutor::Execute(compiledGraph, plan.request.context, &errorMessage)) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            Containers::String("CameraRenderer::Render failed: RenderGraph execute failed: ") +
                errorMessage);
        return false;
    }

    return stageExecutionSucceeded;
}

So each camera's execution today is simply:

  1. Build the graph.
  2. Let each stage record into it.
  3. Compile.
  4. Execute.

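The four steps can be sketched as a tiny skeleton; the graph is reduced to a log of phase names, and the early-out on compile failure mirrors the error handling shown in the code (all names here are hypothetical):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Per-camera "build -> record -> compile -> execute" shape, as a log.
std::vector<std::string> RenderCamera(bool compileOk) {
    std::vector<std::string> log;
    log.push_back("build");    // RenderGraph graph = {};
    log.push_back("record");   // stages record into the builder
    if (!compileOk) {
        log.push_back("compile-error");
        return log;            // the frame aborts before execution
    }
    log.push_back("compile");  // dependency analysis, ordering, lifetimes
    log.push_back("execute");  // barriers + pass callbacks
    return log;
}
```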
6.2 Step 2: record stages in order

bool RecordCameraFrameRenderGraphStages(
    const CameraFramePlan& plan,
    const DirectionalShadowExecutionState& shadowState,
    const RenderSceneData& sceneData,
    CameraFrameExecutionState& executionState,
    RenderGraphBuilder& graphBuilder,
    RenderGraphBlackboard& blackboard,
    bool& stageExecutionSucceeded) {
    CameraFrameRenderGraphFrameData& frameData =
        EmplaceCameraFrameRenderGraphFrameData(blackboard);
    RenderGraphImportedTextureRegistry importedTextures = {};
    CameraFrameRenderGraphBuilderContext builderContext = {
        graphBuilder,
        blackboard,
        frameData,
        importedTextures,
        executionState,
        stageExecutionSucceeded
    };
    const CameraFrameRenderGraphStageContext context = {
        plan,
        shadowState,
        sceneData,
        builderContext
    };
    for (const CameraFrameStageInfo& stageInfo : kOrderedCameraFrameStages) {
        if (!plan.HasFrameStage(stageInfo.stage)) {
            continue;
        }

        if (!RecordCameraFrameRenderGraphStage(stageInfo.stage, context)) {
            return false;
        }
    }

    return true;
}

Note that this is not "whoever wants to record gets to record"; it strictly follows the kOrderedCameraFrameStages order.

This means:

  • The stage order is a frame contract defined by the engine.
  • Each stage then decides internally whether it records as a sequence, a standalone pass, a pipeline graph, or the fallback.
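A minimal sketch of this "engine-defined order + per-plan filtering" contract, with hypothetical stage names standing in for kOrderedCameraFrameStages:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-ins for the engine's ordered stage list.
enum class Stage { ShadowCaster, DepthOnly, MainScene, PostProcess, FinalBlit };

static const std::vector<Stage> kOrderedStages = {
    Stage::ShadowCaster, Stage::DepthOnly, Stage::MainScene,
    Stage::PostProcess, Stage::FinalBlit };

struct Plan {
    std::vector<Stage> enabled;
    bool HasStage(Stage s) const {
        for (Stage e : enabled) if (e == s) return true;
        return false;
    }
};

// The plan can only subtract stages; it can never reorder them.
std::vector<Stage> RecordOrder(const Plan& plan) {
    std::vector<Stage> recorded;
    for (Stage s : kOrderedStages)
        if (plan.HasStage(s)) recorded.push_back(s);
    return recorded;
}
```

Even if a plan lists its stages in an arbitrary order, the recorded order always matches the engine contract.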

6.3 Step 3: each stage first builds its own graph build state

CameraFrameStageGraphBuildState BuildCameraFrameStageGraphBuildState(
    CameraFrameStage stage,
    const CameraFrameRenderGraphStageContext& context) {
    CameraFrameStageGraphBuildState stageState = {};
    stageState.stage = stage;
    stageState.stageName = Containers::String(GetCameraFrameStageName(stage));
    stageState.stageSequence = context.plan.GetPassSequence(stage);

    const RenderPassContext stagePassContext =
        BuildCameraFrameStagePassContext(
            stage,
            context.plan,
            context.shadowState,
            context.sceneData);
    stageState.surfaceTemplate = stagePassContext.surface;
    stageState.hasSourceSurface = stagePassContext.sourceSurface != nullptr;
    if (stageState.hasSourceSurface) {
        stageState.sourceSurfaceTemplate = *stagePassContext.sourceSurface;
    }
    stageState.sourceColorView = stagePassContext.sourceColorView;
    stageState.sourceColorState = stagePassContext.sourceColorState;
    stageState.sourceSurface =
        ImportRenderGraphSurface(
            context.builder.graphBuilder,
            context.builder.importedTextures,
            stageState.stageName + ".Source",
            stagePassContext.sourceSurface,
            RenderGraphSurfaceImportUsage::Source,
            IsCameraFrameFullscreenSequenceStage(stage));
    stageState.outputSurface =
        ImportRenderGraphSurface(
            context.builder.graphBuilder,
            context.builder.importedTextures,
            stageState.stageName + ".Output",
            &stagePassContext.surface,
            RenderGraphSurfaceImportUsage::Output,
            DoesCameraFrameStageGraphOwnColorTransitions(stage),
            DoesCameraFrameStageGraphOwnDepthTransitions(stage));
    stageState.outputColor =
        ResolveStageOutputColorHandle(
            stage,
            context.plan,
            stageState.stageName,
            stagePassContext,
            stageState.outputSurface,
            context.builder.graphBuilder);
    return stageState;
}

Make sure you understand this step, because it explains where resources come from during graph recording:

  • sourceSurface: what this stage reads.
  • outputSurface: what this stage writes.
  • outputColor: the stage's main color output handle.

outputColor is not necessarily the imported surface color; it can also be a transient texture newly created by the graph.

6.4 Step 4: a stage publishes its resources first, then decides how to record

bool RecordCameraFrameRenderGraphStage(
    CameraFrameStage stage,
    const CameraFrameRenderGraphStageContext& context) {
    const CameraFrameStageGraphBuildState stageState =
        BuildCameraFrameStageGraphBuildState(
            stage,
            context);
    PublishCameraFrameStageGraphResources(stageState, context);

    for (CameraFrameStageRecordHandler handler : kCameraFrameStageRecordHandlers) {
        bool stageHandled = false;
        if (!handler(stageState, context, stageHandled)) {
            return false;
        }
        if (stageHandled) {
            return true;
        }
    }

    AddCameraFrameStageFallbackRasterPass(stageState, context);
    return true;
}

The core idea here:

  1. Establish the resource semantics first.
  2. Then determine which method records this stage into the graph.
  3. If none applies, fall through to the fallback raster pass adapter.

This is a good structure for incremental migration.
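The handler-chain dispatch above can be sketched as follows; the two toy handlers and the string return values are hypothetical, but the handled/false/fallback control flow matches the structure shown:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Each handler may decline (handled = false), fail hard (return false),
// or accept the stage and record it.
using Handler = bool (*)(int stage, bool& handled, std::string& outPath);

bool TrySequence(int stage, bool& handled, std::string& out) {
    handled = (stage == 0);
    if (handled) out = "sequence";
    return true;
}

bool TryStandalone(int stage, bool& handled, std::string& out) {
    handled = (stage == 1);
    if (handled) out = "standalone";
    return true;
}

std::string RecordStage(int stage) {
    static const std::vector<Handler> kHandlers = { TrySequence, TryStandalone };
    std::string path;
    for (Handler handler : kHandlers) {
        bool handled = false;
        if (!handler(stage, handled, path)) return "error";
        if (handled) return path;  // first handler that accepts wins
    }
    return "fallback";  // nothing matched: the adapter path
}
```

The fallback at the end is what makes the chain total: every stage lands somewhere.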

6.5 Step 5: there are currently 4 recording paths

Path A: sequence stage

bool TryRecordCameraFrameStageSequence(
    const CameraFrameStageGraphBuildState& stageState,
    const CameraFrameRenderGraphStageContext& context,
    bool& handled) {
    CameraFrameRenderGraphBuilderContext& builder = context.builder;
    if (stageState.stageSequence == nullptr) {
        handled = false;
        return true;
    }

    handled = true;
    const CameraFrameRenderGraphSourceBinding sourceBinding =
        BuildCameraFrameStageGraphSourceBinding(stageState);
    const bool recordResult =
        IsCameraFrameFullscreenSequenceStage(stageState.stage)
            ? [&]() {
                RenderGraphTextureHandle currentSourceColor = {};
                const CameraFrameRenderGraphSourceBinding fullscreenBinding =
                    ResolveCameraFrameFullscreenStageGraphSourceBinding(
                        context.plan,
                        stageState.stage,
                        stageState.surfaceTemplate,
                        sourceBinding.sourceSurfaceTemplate,
                        sourceBinding.sourceColorView,
                        sourceBinding.sourceColorState,
                        sourceBinding.sourceColor,
                        &builder.blackboard);
                currentSourceColor = fullscreenBinding.sourceColor;
                return RecordStageSequencePasses(
                    stageState.stage,
                    stageState.stageName,
                    stageState.stageSequence,
                    builder.executionState,
                    context.plan.request.context,
                    builder.stageExecutionSucceeded,
                    [&context, &stageState, &fullscreenBinding, &currentSourceColor](
                        RenderPass& pass,
                        size_t passIndex,
                        const Containers::String& passName,
                        const RenderPassGraphBeginCallback& beginSequencePass) {
                        return RecordCameraFrameFullscreenSequenceStageGraphPass(
                            context,
                            stageState,
                            passName,
                            fullscreenBinding,
                            stageState.outputColor,
                            passIndex,
                            stageState.stageSequence->GetPassCount(),
                            currentSourceColor,
                            beginSequencePass,
                            pass);
                    });
            }()
            : RecordStageSequencePasses(
                stageState.stage,
                stageState.stageName,
                stageState.stageSequence,
                builder.executionState,
                context.plan.request.context,
                builder.stageExecutionSucceeded,
                [&context, &stageState](
                    RenderPass& pass,
                    size_t,
                    const Containers::String& passName,
                    const RenderPassGraphBeginCallback& beginSequencePass) {
                    return RecordCameraFrameRegularSequenceStageRenderGraphPass(
                        context,
                        stageState,
                        passName,
                        stageState.outputSurface,
                        beginSequencePass,
                        pass);
                });
    if (!recordResult) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            Containers::String("CameraRenderer::Render failed: pass-sequence graph recording returned false for ") +
                stageState.stageName);
        return false;
    }

    return true;
}

Two points in this code matter:

  1. Regular sequences and fullscreen sequences are handled separately.
  2. A fullscreen sequence maintains currentSourceColor: the previous pass's output becomes the next pass's input.

This is how the post-processing chain is actually organized inside the graph.
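A minimal sketch of the currentSourceColor hand-off, with plain integers standing in for texture handles (a hypothetical simplification, not the engine's real types):

```cpp
#include <cassert>
#include <vector>

// Each pass reads the previous pass's output and writes a new target.
struct PassLink { int source; int target; };

std::vector<PassLink> ChainPasses(int initialSource, int passCount) {
    std::vector<PassLink> links;
    int currentSourceColor = initialSource;
    for (int i = 0; i < passCount; ++i) {
        const int target = 100 + i;  // stands in for a transient texture handle
        links.push_back({currentSourceColor, target});
        currentSourceColor = target; // next pass reads what this pass just wrote
    }
    return links;
}
```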

Path B: standalone render pass

bool TryRecordCameraFrameStageStandaloneRenderGraphPass(
    const CameraFrameStageGraphBuildState& stageState,
    const CameraFrameRenderGraphStageContext& context,
    bool& handled) {
    CameraFrameRenderGraphBuilderContext& builder = context.builder;
    RenderPass* const standaloneStagePass =
        ResolveCameraFrameStandaloneStagePass(
            stageState.stage,
            builder.executionState);
    if (standaloneStagePass == nullptr ||
        !standaloneStagePass->SupportsRenderGraph()) {
        handled = false;
        return true;
    }

    handled = true;
    const RenderSceneData stageSceneData =
        BuildCameraFrameStandaloneStageSceneData(
            stageState.stage,
            context,
            stageState.surfaceTemplate);
    if (!RecordCameraFrameStandaloneStageRenderGraphPass(
            context,
            stageState,
            stageSceneData,
            *standaloneStagePass,
            builder.stageExecutionSucceeded)) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            Containers::String("CameraRenderer::Render failed: RenderPass::RecordRenderGraph returned false for ") +
                stageState.stageName);
        return false;
    }

    return true;
}

This path mainly covers:

  • ShadowCaster
  • DepthOnly
  • ObjectId

These are usually not the main body of the pipeline, but each can be implemented by a single standalone pass.

Path C: pipeline stage graph

bool TryRecordCameraFramePipelineStageGraphPass(
    const CameraFrameStageGraphBuildState& stageState,
    const CameraFrameRenderGraphStageContext& context,
    bool& handled) {
    CameraFrameRenderGraphBuilderContext& builder = context.builder;
    if (!SupportsCameraFramePipelineGraphRecording(stageState.stage) ||
        builder.executionState.pipeline == nullptr ||
        !builder.executionState.pipeline->SupportsStageRenderGraph(
            stageState.stage)) {
        handled = false;
        return true;
    }

    handled = true;
    if (!RecordCameraFramePipelineStageGraphPass(
            context,
            stageState,
            *builder.executionState.pipeline)) {
        Debug::Logger::Get().Error(
            Debug::LogCategory::Rendering,
            "CameraRenderer::Render failed: RenderPipeline::RecordStageRenderGraph returned false");
        return false;
    }

    return true;
}

Currently only MainScene takes this path.

It is also the most important landing point of your native SRP seam today.

Path D: fallback raster pass adapter

void AddCameraFrameStageFallbackRasterPass(
    const CameraFrameStageGraphBuildState& stageState,
    const CameraFrameRenderGraphStageContext& context) {
    CameraFrameRenderGraphBuilderContext& builder = context.builder;
    const CameraFrameStageGraphBuildState capturedStageState = stageState;
    const CameraFrameRenderGraphStageContext capturedContext = context;
    CameraFrameExecutionState* const executionState = &builder.executionState;
    bool* const stageExecutionSucceeded = &builder.stageExecutionSucceeded;
    builder.graphBuilder.AddRasterPass(
        capturedStageState.stageName,
        [capturedStageState, capturedContext, executionState, stageExecutionSucceeded](
            RenderGraphPassBuilder& passBuilder) {
            RecordCameraFrameStageFallbackPassIO(
                capturedStageState,
                passBuilder);
            passBuilder.SetExecuteCallback(
                [capturedStageState, capturedContext, executionState, stageExecutionSucceeded](
                    const RenderGraphExecutionContext& executionContext) {
                    if (!*stageExecutionSucceeded) {
                        return;
                    }

                    *stageExecutionSucceeded =
                        ExecuteCameraFrameStageFallbackPass(
                            capturedStageState,
                            capturedContext,
                            *executionState,
                            executionContext);
                });
        });
}

This fallback matters because it means:

Even if a pass is not yet fully graph-native, it can still be attached to the camera-level RenderGraph main chain.

This design is very practical: it enables incremental refactoring instead of a big-bang rewrite of every pass.


7. RenderPassGraphContract: how legacy passes plug into the graph

A common first question when people look at graph systems:

If an old pass only has Execute(), how does it get into the graph?

The answer is this adapter layer.

7.1 The RenderPass interface itself supports both immediate execution and graph recording

class RenderPass {
public:
    virtual ~RenderPass() = default;

    virtual const char* GetName() const = 0;

    virtual bool Initialize(const RenderContext&) {
        return true;
    }

    virtual void Shutdown() {
    }

    virtual bool SupportsRenderGraph() const {
        return false;
    }

    virtual bool RecordRenderGraph(
        const RenderPassRenderGraphContext&) {
        return false;
    }

    virtual bool Execute(const RenderPassContext& context) = 0;
};

This interface design is crucial:

  • An old pass survives by implementing only Execute().
  • A new pass can implement RecordRenderGraph().
  • The engine's middle layer decides which path to take.
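A stripped-down sketch of this dual-path contract and the dispatch decision, with string return values standing in for real recording/execution (hypothetical, not the engine's actual types):

```cpp
#include <cassert>
#include <string>

// A legacy pass only implements Execute(); a graph-aware pass opts in
// via SupportsRenderGraph() and RecordRenderGraph().
class Pass {
public:
    virtual ~Pass() = default;
    virtual bool SupportsRenderGraph() const { return false; }
    virtual std::string RecordRenderGraph() { return ""; }
    virtual std::string Execute() = 0;
};

class LegacyPass : public Pass {
public:
    std::string Execute() override { return "immediate"; }
};

class GraphPass : public Pass {
public:
    bool SupportsRenderGraph() const override { return true; }
    std::string RecordRenderGraph() override { return "graph"; }
    std::string Execute() override { return "immediate"; }
};

// The middle layer picks the path; passes never pick for themselves.
std::string Dispatch(Pass& pass) {
    return pass.SupportsRenderGraph() ? pass.RecordRenderGraph() : pass.Execute();
}
```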

7.2 The adapter's core logic

bool RecordCallbackRasterRenderPass(
    const RenderPassRenderGraphContext& context,
    const RenderPassGraphIO& io,
    RenderPassGraphExecutePassCallback executePassCallback,
    std::vector<RenderGraphTextureHandle> additionalReadTextures) {
    if (!executePassCallback) {
        return false;
    }

    const Containers::String passName = context.passName;
    const RenderContext renderContext = context.renderContext;
    const std::shared_ptr<const RenderSceneData> sceneData =
        std::make_shared<RenderSceneData>(context.sceneData);
    const RenderSurface surface = context.surface;
    const bool hasSourceSurface = context.sourceSurface != nullptr;
    const RenderSurface sourceSurface =
        hasSourceSurface ? *context.sourceSurface : RenderSurface();
    RHI::RHIResourceView* const sourceColorView = context.sourceColorView;
    const RHI::ResourceStates sourceColorState = context.sourceColorState;
    const RenderGraphTextureHandle sourceColorTexture = context.sourceColorTexture;
    const std::vector<RenderGraphTextureHandle> colorTargets = context.colorTargets;
    const RenderGraphTextureHandle depthTarget = context.depthTarget;
    bool* const executionSucceeded = context.executionSucceeded;
    const RenderPassGraphBeginCallback beginPassCallback = context.beginPassCallback;
    const RenderPassGraphEndCallback endPassCallback = context.endPassCallback;
    context.graphBuilder.AddRasterPass(
        passName,
        [renderContext,
         sceneData,
         surface,
         hasSourceSurface,
         sourceSurface,
         sourceColorView,
         sourceColorState,
         sourceColorTexture,
         colorTargets,
         depthTarget,
         executionSucceeded,
         beginPassCallback,
         endPassCallback,
         executePassCallback,
         additionalReadTextures,
         io](
            RenderGraphPassBuilder& passBuilder) {
            if (io.readSourceColor && sourceColorTexture.IsValid()) {
                passBuilder.ReadTexture(sourceColorTexture);
            }

            for (RenderGraphTextureHandle readTexture : additionalReadTextures) {
                if (readTexture.IsValid()) {
                    passBuilder.ReadTexture(readTexture);
                }
            }

            if (io.writeColor) {
                for (RenderGraphTextureHandle colorTarget : colorTargets) {
                    if (colorTarget.IsValid()) {
                        passBuilder.WriteTexture(colorTarget);
                    }
                }
            }

            if (io.writeDepth && depthTarget.IsValid()) {
                passBuilder.WriteDepthTexture(depthTarget);
            }

            passBuilder.SetExecuteCallback(
                [renderContext,
                 sceneData,
                 surface,
                 hasSourceSurface,
                 sourceSurface,
                 sourceColorView,
                 sourceColorState,
                 sourceColorTexture,
                 colorTargets,
                 depthTarget,
                 executionSucceeded,
                 beginPassCallback,
                 endPassCallback,
                 executePassCallback,
                 io](
                    const RenderGraphExecutionContext& executionContext) {
                    const RenderSurface* resolvedSourceSurface =
                        hasSourceSurface ? &sourceSurface : nullptr;
                    RHI::RHIResourceView* resolvedSourceColorView = sourceColorView;
                    RHI::ResourceStates resolvedSourceColorState = sourceColorState;
                    RenderSurface graphManagedSourceSurface = {};
                    if (!ResolveGraphManagedSourceSurface(
                            hasSourceSurface ? &sourceSurface : nullptr,
                            sourceColorView,
                            sourceColorState,
                            sourceColorTexture,
                            executionContext,
                            io,
                            resolvedSourceSurface,
                            resolvedSourceColorView,
                            resolvedSourceColorState,
                            graphManagedSourceSurface)) {
                        if (executionSucceeded != nullptr) {
                            *executionSucceeded = false;
                        }
                        return;
                    }

                    const RenderSurface* resolvedSurface = &surface;
                    RenderSurface graphManagedSurface = {};
                    if (!ResolveGraphManagedOutputSurface(
                            surface,
                            colorTargets,
                            depthTarget,
                            executionContext,
                            io,
                            resolvedSurface,
                            graphManagedSurface)) {
                        if (executionSucceeded != nullptr) {
                            *executionSucceeded = false;
                        }
                        return;
                    }

                    const RenderPassContext passContext = {
                        renderContext,
                        *resolvedSurface,
                        *sceneData,
                        resolvedSourceSurface,
                        resolvedSourceColorView,
                        resolvedSourceColorState
                    };
                    const bool executeResult = executePassCallback(passContext);
                    if (endPassCallback) {
                        endPassCallback(passContext);
                    }
                    if (executionSucceeded != nullptr) {
                        *executionSucceeded = executeResult;
                    }
                });
        });
    return true;
}

What this code means:

  • The graph phase declares the IO first.
  • At real execution time, the graph-managed textures are turned back into a RenderPassContext.
  • Then the old Execute() is called as before.

So it is essentially a bridge that wraps an immediate pass as a graph pass.

It is one of the key pieces of infrastructure that lets the current rendering layer evolve incrementally toward SRP/URP.


8. Where does BuiltinForwardPipeline sit in this chain now?

Many people assume the current BuiltinForwardPipeline is just an "old-style forward renderer".

It no longer is.

8.1 It can both render directly and record the MainScene stage graph

bool BuiltinForwardPipeline::SupportsStageRenderGraph(
    CameraFrameStage stage) const {
    return SupportsCameraFramePipelineGraphRecording(stage);
}

bool BuiltinForwardPipeline::RecordStageRenderGraph(
    const RenderPipelineStageRenderGraphContext& context) {
    return context.stage == CameraFrameStage::MainScene &&
           Internal::BuiltinForwardStageGraphBuilder::Record(*this, context);
}

bool BuiltinForwardPipeline::Render(
    const FrameExecutionContext& executionContext) {
    return ExecuteForwardSceneFrame(executionContext, true);
}

This shows it has two modes of operation:

  1. Legacy path: direct Render(...).
  2. New path: RecordStageRenderGraph(...) for MainScene.

So it is already running in a "dual-stack" state.

8.2 Its main scene is already split into scene phases and injection points

const std::array<ForwardSceneStep, 9>& GetBuiltinForwardSceneSteps() {
    static constexpr std::array<ForwardSceneStep, 9> kForwardSceneSteps = {
        MakeForwardSceneInjectionStep(SceneRenderInjectionPoint::BeforeOpaque),
        MakeForwardSceneBuiltinPhaseStep(ScenePhase::Opaque),
        MakeForwardSceneInjectionStep(SceneRenderInjectionPoint::AfterOpaque),
        MakeForwardSceneInjectionStep(SceneRenderInjectionPoint::BeforeSkybox),
        MakeForwardSceneBuiltinPhaseStep(ScenePhase::Skybox),
        MakeForwardSceneInjectionStep(SceneRenderInjectionPoint::AfterSkybox),
        MakeForwardSceneInjectionStep(SceneRenderInjectionPoint::BeforeTransparent),
        MakeForwardSceneBuiltinPhaseStep(ScenePhase::Transparent),
        MakeForwardSceneInjectionStep(SceneRenderInjectionPoint::AfterTransparent)
    };
    return kForwardSceneSteps;
}

This shows the current forward main scene is not simply "one opaque function + one transparent function"; it already has:

  • phases
  • injection points
  • a feature host

This is already quite close to how Unity organizes RendererFeatures, except it is still the native C++ version.
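The 9-step interleaving above can be sketched with strings (a hypothetical simplification of GetBuiltinForwardSceneSteps):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Builtin phases with an injection point before and after each.
std::vector<std::string> BuildSceneSteps() {
    const std::vector<std::string> phases = { "Opaque", "Skybox", "Transparent" };
    std::vector<std::string> steps;
    for (const std::string& phase : phases) {
        steps.push_back("Before" + phase);  // injection point
        steps.push_back(phase);             // builtin phase
        steps.push_back("After" + phase);   // injection point
    }
    return steps;
}
```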

8.3 Some things a future URP layer should own are indeed still in C++

For example, the registration of builtin features:

void RegisterBuiltinForwardSceneFeatures(SceneRenderFeatureHost& featureHost) {
    featureHost.AddFeaturePass(std::make_unique<Features::BuiltinGaussianSplatPass>());
    featureHost.AddFeaturePass(std::make_unique<Features::BuiltinVolumetricPass>());
}

This code captures the current state well:

  • BuiltinGaussianSplatPass
  • BuiltinVolumetricPass

These still hang off the native builtin forward pipeline today.

From a long-term architecture view, they belong more to a URP-like renderer-feature layer than to something the low-level core must own forever.

8.4 How BuiltinForwardStageGraphBuilder records the main scene into the graph

bool BuiltinForwardStageGraphBuilder::Record(
    BuiltinForwardPipeline& pipeline,
    const RenderPipelineStageRenderGraphContext& context) {
    const RenderSurface graphManagedSurface =
        BuildRenderGraphManagedSurfaceTemplate(context.surfaceTemplate);
    const RenderGraphRecordingContext baseRecordingContext =
        BuildRenderGraphRecordingContext(context);
    RenderGraphRecordingContextBuildParams recordingParams = {};
    recordingParams.surface = &graphManagedSurface;
    recordingParams.overrideSourceBinding = true;
    recordingParams.sourceBinding =
        BuildRenderGraphRecordingSourceBinding(baseRecordingContext);
    RenderGraphRecordingContext recordingContext =
        BuildRenderGraphRecordingContext(
            baseRecordingContext,
            std::move(recordingParams));
    const RenderPipelineStageRenderGraphContext graphContext =
        BuildRenderPipelineStageRenderGraphContext(
            recordingContext,
            CameraFrameStage::MainScene);
    const CameraFrameRenderGraphResources* const frameResources =
        TryGetCameraFrameRenderGraphResources(recordingContext.blackboard);
    const RenderGraphTextureHandle mainDirectionalShadowTexture =
        frameResources != nullptr
            ? frameResources->mainDirectionalShadow
            : RenderGraphTextureHandle{};
    bool* const executionSucceeded = recordingContext.executionSucceeded;
    const std::shared_ptr<ForwardSceneGraphExecutionState> graphExecutionState =
        std::make_shared<ForwardSceneGraphExecutionState>();
    bool clearAttachments = true;
    for (const ForwardSceneStep& step : GetBuiltinForwardSceneSteps()) {
        if (step.type == ForwardSceneStepType::InjectionPoint) {
            bool recordedAnyPass = false;
            if (!::XCEngine::Rendering::RecordRenderPipelineStageFeaturePasses(
                    graphContext,
                    pipeline.m_forwardSceneFeatureHost,
                    step.injectionPoint,
                    clearAttachments,
                    beginRecordedPass,
                    endRecordedPass,
                    &recordedAnyPass)) {
                return false;
            }

            if (recordedAnyPass) {
                clearAttachments = false;
            }
            continue;
        }

        const std::vector<RenderGraphTextureHandle> additionalReadTextures =
            ScenePhaseSamplesMainDirectionalShadow(step.scenePhase) &&
                mainDirectionalShadowTexture.IsValid()
            ? std::vector<RenderGraphTextureHandle>{ mainDirectionalShadowTexture }
            : std::vector<RenderGraphTextureHandle>{};
        if (!::XCEngine::Rendering::RecordRenderPipelineStagePhasePass(
                graphContext,
                step.scenePhase,
                [&pipeline, scenePhase = step.scenePhase](const RenderPassContext& passContext) {
                    const FrameExecutionContext executionContext(
                        passContext.renderContext,
                        passContext.surface,
                        passContext.sceneData,
                        passContext.sourceSurface,
                        passContext.sourceColorView,
                        passContext.sourceColorState);
                    const ScenePhaseExecutionContext scenePhaseExecutionContext =
                        pipeline.BuildScenePhaseExecutionContext(executionContext, scenePhase);
                    return pipeline.ExecuteBuiltinScenePhase(scenePhaseExecutionContext);
                },
                beginPhasePass,
                endRecordedPass,
                additionalReadTextures)) {
            return false;
        }
        clearAttachments = false;
    }

    return true;
}

This code is especially worth remembering, because it shows:

BuiltinForwardPipeline no longer handles the entire flow by itself; inside the MainScene stage, it records its internal phases and features into graph passes, step by step.

This is an important architectural turning point.


9. SceneRenderFeatureHost: you already have a native renderer-feature host

The header:

class SceneRenderFeatureHost {
public:
    void AddFeaturePass(std::unique_ptr<SceneRenderFeaturePass> featurePass);
    size_t GetFeaturePassCount() const;
    SceneRenderFeaturePass* GetFeaturePass(size_t index) const;

    bool Initialize(const RenderContext& context);
    void Shutdown();
    bool Prepare(const FrameExecutionContext& executionContext) const;
    bool Record(
        const SceneRenderFeaturePassRenderGraphContext& context,
        SceneRenderInjectionPoint injectionPoint,
        bool* recordedAnyPass = nullptr) const;
    bool Execute(
        const FrameExecutionContext& executionContext,
        SceneRenderInjectionPoint injectionPoint) const;

private:
    std::vector<std::unique_ptr<SceneRenderFeaturePass>> m_featurePasses;
};

The recording logic:

bool SceneRenderFeatureHost::Record(
    const SceneRenderFeaturePassRenderGraphContext& context,
    SceneRenderInjectionPoint injectionPoint,
    bool* recordedAnyPass) const {
    bool hasRecordedPass = false;
    bool clearAttachments = context.clearAttachments;

    for (size_t featureIndex = 0u; featureIndex < m_featurePasses.size(); ++featureIndex) {
        const std::unique_ptr<SceneRenderFeaturePass>& featurePassOwner = m_featurePasses[featureIndex];
        SceneRenderFeaturePass* featurePass = featurePassOwner.get();
        if (featurePass == nullptr ||
            !featurePass->SupportsInjectionPoint(injectionPoint) ||
            !featurePass->IsActive(context.sceneData)) {
            continue;
        }

        const SceneRenderFeaturePassRenderGraphContext featureContext =
            CloneSceneRenderFeaturePassRenderGraphContext(
                context,
                BuildFeatureGraphPassName(
                    context.passName,
                    injectionPoint,
                    *featurePass,
                    featureIndex),
                clearAttachments);
        if (!featurePass->RecordRenderGraph(featureContext)) {
            Debug::Logger::Get().Error(
                Debug::LogCategory::Rendering,
                (Containers::String("SceneRenderFeatureHost record failed at injection point '") +
                    ToString(injectionPoint) +
                    "': " +
                    featurePass->GetName()).CStr());
            return false;
        }

        hasRecordedPass = true;
        clearAttachments = false;
    }

    if (recordedAnyPass != nullptr) {
        *recordedAnyPass = hasRecordedPass;
    }
    return true;
}

This is a very typical "feature injection host":

  • Filter by injection point.
  • Check the activation state against sceneData.
  • Generate a pass name for each feature.
  • Record them into the graph one by one.

So if anyone asks "does this already have some URP flavor", the answer is:

Yes, more than a little. These capabilities just still live mainly in the native C++ layer.
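The filtering loop in Record can be sketched like this; the clearAttachments "baton" matches the code above, while Feature and the string results are hypothetical stand-ins:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Feature {
    std::string name;
    int injectionPoint;
    bool active;
};

// Filter by injection point and active state; only the first recorded
// feature clears the attachments, later ones load.
std::vector<std::string> RecordFeatures(
    const std::vector<Feature>& features, int injectionPoint, bool clearAttachments) {
    std::vector<std::string> recorded;
    for (const Feature& feature : features) {
        if (feature.injectionPoint != injectionPoint || !feature.active) continue;
        recorded.push_back(feature.name + (clearAttachments ? ":clear" : ":load"));
        clearAttachments = false;  // the baton is handed off after the first pass
    }
    return recorded;
}
```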


10. How far has the native SRP seam come?

10.1 ScriptableRenderPipelineHost is the most important native SRP boundary today

First, the header:

class ScriptableRenderPipelineHost final : public RenderPipeline {
public:
    ScriptableRenderPipelineHost();
    explicit ScriptableRenderPipelineHost(
        std::unique_ptr<RenderPipelineRenderer> pipelineRenderer);
    explicit ScriptableRenderPipelineHost(
        std::shared_ptr<const RenderPipelineAsset> pipelineRendererAsset);
    ~ScriptableRenderPipelineHost() override;

    using RenderPipeline::Render;

    void SetStageRecorder(std::unique_ptr<RenderPipelineStageRecorder> stageRecorder);
    void SetPipelineRenderer(std::unique_ptr<RenderPipelineRenderer> pipelineRenderer);
    void SetPipelineRendererAsset(
        std::shared_ptr<const RenderPipelineAsset> pipelineRendererAsset);

    bool Initialize(const RenderContext& context) override;
    void Shutdown() override;
    bool SupportsStageRenderGraph(CameraFrameStage stage) const override;
    bool RecordStageRenderGraph(
        const RenderPipelineStageRenderGraphContext& context) override;
    bool Render(const FrameExecutionContext& executionContext) override;
    bool Render(
        const RenderContext& context,
        const RenderSurface& surface,
        const RenderSceneData& sceneData) override;
};

Now the two most important functions:

bool ScriptableRenderPipelineHost::SupportsStageRenderGraph(
    CameraFrameStage stage) const {
    return (m_stageRecorder != nullptr &&
            m_stageRecorder->SupportsStageRenderGraph(stage)) ||
           (m_pipelineRenderer != nullptr &&
            m_pipelineRenderer->SupportsStageRenderGraph(stage));
}

bool ScriptableRenderPipelineHost::RecordStageRenderGraph(
    const RenderPipelineStageRenderGraphContext& context) {
    if (!EnsureInitialized(context.renderContext)) {
        return false;
    }

    if (m_stageRecorder != nullptr &&
        m_stageRecorder->SupportsStageRenderGraph(context.stage)) {
        return m_stageRecorder->RecordStageRenderGraph(context);
    }

    return m_pipelineRenderer != nullptr &&
           m_pipelineRenderer->RecordStageRenderGraph(context);
}

This makes the architectural intent of ScriptableRenderPipelineHost very clear:

  • There is a fallback renderer at the bottom.
  • A stage recorder can be mounted on top.
  • If the recorder supports a stage, the recorder takes priority.
  • Otherwise, it falls back to the underlying renderer.

This design is the native SRP host seam.
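A minimal sketch of the "recorder first, renderer as fallback" decision, with hypothetical StageBackend structs in place of the real recorder/renderer interfaces:

```cpp
#include <cassert>
#include <string>

// Stand-in for either a stage recorder or a fallback renderer.
struct StageBackend {
    bool supports;
    std::string name;
};

// Null-checks mirror the real host: recorder wins when it supports
// the stage; otherwise the underlying renderer handles it.
std::string Dispatch(const StageBackend* recorder, const StageBackend* renderer) {
    if (recorder != nullptr && recorder->supports) return recorder->name;
    if (renderer != nullptr && renderer->supports) return renderer->name;
    return "unhandled";
}
```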

10.2 It also attaches default standalone passes to the host pipeline

void ScriptableRenderPipelineHostAsset::ConfigurePipeline(
    RenderPipeline& pipeline) const {
    pipeline.SetCameraFrameStandalonePass(
        CameraFrameStage::ObjectId,
        std::make_unique<Passes::BuiltinObjectIdPass>());
    pipeline.SetCameraFrameStandalonePass(
        CameraFrameStage::DepthOnly,
        std::make_unique<Passes::BuiltinDepthOnlyPass>());
    pipeline.SetCameraFrameStandalonePass(
        CameraFrameStage::ShadowCaster,
        std::make_unique<Passes::BuiltinShadowCasterPass>());
}

In other words, the current host is not an "empty shell".

It already carries part of the basic pipeline responsibilities; it is just that the main-scene recorder has not actually been switched over to managed code yet.


11. How far the managed C# side has actually come

Conclusion first:

The managed SRP today is still only a skeleton, nowhere near a "Unity-style SRP runtime".

11.1 The C# ScriptableRenderPipelineAsset is still almost an empty shell

namespace XCEngine
{
    public abstract class ScriptableRenderPipelineAsset : RenderPipelineAsset
    {
        protected ScriptableRenderPipelineAsset()
        {
        }

        protected internal virtual ScriptableRenderPipeline CreatePipeline()
        {
            return null;
        }
    }
}

11.2 The C# ScriptableRenderPipeline is also just a minimal skeleton

namespace XCEngine
{
    public abstract class ScriptableRenderPipeline : Object
    {
        protected ScriptableRenderPipeline()
        {
        }

        protected internal virtual bool SupportsStageRenderGraph(
            CameraFrameStage stage)
        {
            return false;
        }

        protected internal virtual bool RecordStageRenderGraph(
            CameraFrameStage stage)
        {
            return false;
        }
    }
}

Note that there is not even a real graph/context parameter here.

In other words, C# currently cannot even obtain a sufficiently complete recording context.

11.3 GraphicsSettings currently only records "some C# type name"

public static class GraphicsSettings
{
    public static Type renderPipelineAssetType
    {
        get
        {
            string assemblyQualifiedName =
                InternalCalls.Rendering_GetRenderPipelineAssetTypeName();
            if (string.IsNullOrEmpty(assemblyQualifiedName))
            {
                return null;
            }

            return Type.GetType(assemblyQualifiedName, throwOnError: false);
        }
        set
        {
            if (value != null &&
                !typeof(ScriptableRenderPipelineAsset).IsAssignableFrom(value))
            {
                throw new ArgumentException(
                    "GraphicsSettings.renderPipelineAssetType must derive from ScriptableRenderPipelineAsset.",
                    nameof(value));
            }

            InternalCalls.Rendering_SetRenderPipelineAssetType(value);
        }
    }
}

The semantics of this API today are:

  • Tell native about a piece of "type information".
  • It does not yet create and hold a real managed asset instance.

11.4 The native-side ManagedScriptableRenderPipelineAsset is also just a bridging skeleton

Header:

class ManagedScriptableRenderPipelineAsset final : public RenderPipelineAsset {
public:
    explicit ManagedScriptableRenderPipelineAsset(
        ManagedRenderPipelineAssetDescriptor descriptor);

    std::unique_ptr<RenderPipeline> CreatePipeline() const override;
    FinalColorSettings GetDefaultFinalColorSettings() const override;

private:
    ManagedRenderPipelineAssetDescriptor m_descriptor;
    ScriptableRenderPipelineHostAsset m_fallbackAsset;
};

class ManagedRenderPipelineBridge {
public:
    virtual ~ManagedRenderPipelineBridge() = default;

    virtual std::unique_ptr<RenderPipelineStageRecorder> CreateStageRecorder(
        const ManagedRenderPipelineAssetDescriptor&) const {
        return nullptr;
    }
};

Implementation:

std::unique_ptr<RenderPipeline> ManagedScriptableRenderPipelineAsset::CreatePipeline() const {
    std::unique_ptr<RenderPipeline> pipeline = m_fallbackAsset.CreatePipeline();
    auto* host = dynamic_cast<ScriptableRenderPipelineHost*>(pipeline.get());
    if (host == nullptr) {
        return pipeline;
    }

    const std::shared_ptr<const ManagedRenderPipelineBridge> bridge =
        GetManagedRenderPipelineBridgeStorage();
    if (bridge != nullptr) {
        host->SetStageRecorder(
            bridge->CreateStageRecorder(m_descriptor));
    }

    return pipeline;
}

What this really means is:

  • First create a native fallback host.
  • If a bridge exists, hand the host a stage recorder.

Note that this is not yet:

  • native holding a real managed pipeline instance
  • the managed pipeline owning a full lifecycle
  • the managed pipeline owning a full render context

So you cannot yet say "SRP is wired up"; you can only say:

The native side has left the seam ready for an SRP runtime.


12. An accurate assessment of the current architecture

12.1 What is already done right

  1. The top-level main chain has been switched from "render directly" to "planning + execution + graph".
  2. CameraFramePlan is a genuinely meaningful per-camera execution plan, not a boilerplate struct.
  3. RenderGraph is usable, not an empty shell.
  4. MainScene can already be graph-recorded instead of only immediate-rendered.
  5. RenderPassGraphContract lets legacy passes adopt the graph incrementally.
  6. SceneRenderFeatureHost already provides a renderer-feature-style injection model.
  7. ScriptableRenderPipelineHost is already the right native SRP boundary.

12.2 What has not fully converged yet

  1. The managed SRP runtime does not really exist yet.
  2. The C# pipeline API still cannot obtain a strong enough context.
  3. A lot of organizational logic that should eventually belong to a URP-like package layer still lives in the C++ builtin forward pipeline.
  4. RenderGraph is still the lightweight version, without the stronger optimization capabilities.

12.3 Which parts already have a bit of a "URP package layer" flavor

You have asked this question many times before; here is the most accurate judgment:

The current C++ rendering layer does already do some things that should eventually move up into a URP-like package layer, for example:

  • main-scene phase organization
  • feature injection point organization
  • registration of the Gaussian/volumetric features
  • planning of the post-processing and final-output color chain
  • part of the organization of the default shadow execution strategy

But that does not mean these things must be moved out immediately.

The more accurate statement is:

The native layer now has a workable "builtin renderer organization layer"; going forward, the organizational authority should be moved up gradually, rather than all the low-level implementation being moved to C# at once.


13. Why the next step is not starting URP directly, but building the SRP runtime first

Because your most fundamental gap right now is not some pass, nor some shadow algorithm.

The most fundamental gap is:

At runtime, the managed pipeline does not really exist yet.

If this problem is not solved first, everything below becomes a castle in the air:

  • UniversalRenderPipelineAsset
  • RendererFeature
  • custom renderers
  • a deferred rendering pipeline
  • lightmap integration
  • moving organizational authority for shadows/volumetrics/Gaussians upward

The reason is simple:

  • C# cannot yet create and hold a real pipeline instance.
  • There is no real ScriptableRenderContext yet.
  • C# cannot obtain the context needed to record a frame's graph.

So what you should cut into today is the SRP runtime main line, not the URP package layer.


14. My recommended next-step SRP main-line plan

The ordering below is, in my view, the most stable plan and the one that best fits your current code state.

Stage 1: wire up a real managed pipeline runtime

Goals:

  1. native no longer knows only a C# type name.
  2. native can create and hold real managed asset / pipeline instances.
  3. ManagedRenderPipelineBridge is no longer just a test-stub-style seam.
  4. The pipeline lifecycle is controllable: it can be initialized, released, and swapped.

Only once this step is done do you truly have the "entry point of an SRP runtime".
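The four goals above can be sketched as a tiny lifecycle model. This is a minimal sketch under stated assumptions, not engine API: `ManagedPipelineHandle` and `ManagedRuntimeBridge` are hypothetical names standing in for "native creates, holds, and disposes a live managed instance" rather than merely remembering a type name.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical: an opaque handle standing in for a live managed pipeline
// instance held by native code (in a real build this would wrap a GC handle).
struct ManagedPipelineHandle {
    std::string assetTypeName;  // which C# asset type created it
    bool initialized = false;
};

// Hypothetical bridge: unlike a type-name-only seam, it can create, hold,
// and explicitly dispose the instance, so pipelines are swappable at runtime.
class ManagedRuntimeBridge {
public:
    std::unique_ptr<ManagedPipelineHandle> CreatePipeline(
        const std::string& typeName) {
        auto handle = std::make_unique<ManagedPipelineHandle>();
        handle->assetTypeName = typeName;
        handle->initialized = true;  // simulated managed-side initialization
        return handle;
    }

    void DisposePipeline(ManagedPipelineHandle& handle) {
        handle.initialized = false;  // simulated managed-side teardown
    }
};
```

The point of the sketch is only the shape of the contract: create returns an owned handle, and teardown is explicit rather than left to garbage collection.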

Stage 2: define ScriptableRenderContext v1

Do not expose the entire RHI right away.

Managed code should be given controlled wrappers over native capabilities, for example:

  1. Record a raster/compute graph pass.
  2. Access the current camera frame's source / target / blackboard.
  3. Call the native scene renderer to draw Opaque / Skybox / Transparent.
  4. Invoke fullscreen / blit operations.
  5. Access the current stage and frame semantics.

The core principle here is:

C# organizes; C++ is the low-level execution kernel.
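One way to read that principle concretely: the managed side only fills a narrow command list, and the native side replays it into the real RenderGraph. The sketch below is an assumption-level illustration; `ScriptableRenderContextV1`, `ContextCommand`, and `ScenePhase` are invented names, not existing engine types.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical phases matching the capabilities listed above.
enum class ScenePhase { Opaque, Skybox, Transparent };

// A recorded intent; native code would later translate each entry into
// real RenderGraph passes, draws, and blits.
struct ContextCommand {
    std::string kind;    // "graph_pass", "draw_scene", "blit"
    std::string detail;  // pass name, phase name, or blit route
};

class ScriptableRenderContextV1 {
public:
    void RecordGraphPass(const std::string& name) {
        m_commands.push_back({"graph_pass", name});
    }
    void DrawScene(ScenePhase phase) {
        static const char* kNames[] = {"Opaque", "Skybox", "Transparent"};
        m_commands.push_back({"draw_scene", kNames[static_cast<int>(phase)]});
    }
    void Blit(const std::string& from, const std::string& to) {
        m_commands.push_back({"blit", from + "->" + to});
    }
    // Native replay entry point: the only consumer of the recorded list.
    const std::vector<ContextCommand>& Commands() const { return m_commands; }

private:
    std::vector<ContextCommand> m_commands;
};
```

Note what is deliberately absent: no barriers, no raw texture views, no device objects. The managed side expresses intent; the kernel owns execution.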

Stage 3: build a minimal viable managed forward pipeline

The goal is not to build URP in one leap.

The goals are:

  1. C# can create a pipeline.
  2. C# can participate in MainScene stage graph recording.
  3. C# can call the native scene renderer to draw the main scene.
  4. This managed pipeline can genuinely replace the current builtin forward main-scene organization.

Once this step is through, the SRP main line is truly up and running.

Stage 4: demote builtin forward to a native renderer backend

The direction here is not to delete builtin forward, but to change its role.

From:

  • a complete "default end-to-end rendering pipeline"

into:

  • a native scene renderer backend
  • a default renderer scheduled by the managed SRP

In other words, the future C# SRP should not depend directly on BuiltinForwardPipeline's whole-pipeline semantics; it should depend on a more stable set of native renderer contracts.
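What "a more stable native renderer contract" could look like can be sketched in a few lines. This is a hypothetical shape, not existing code: `ISceneRendererBackend` and `BuiltinForwardBackend` are assumed names showing how builtin forward would shrink from "the whole pipeline" to "one implementation of a narrow interface".

```cpp
#include <cassert>

// Hypothetical contract: the only surface a managed SRP would depend on.
class ISceneRendererBackend {
public:
    virtual ~ISceneRendererBackend() = default;
    virtual bool DrawOpaque() = 0;
    virtual bool DrawSkybox() = 0;
    virtual bool DrawTransparent() = 0;
};

// Builtin forward demoted to one backend behind the contract; the counter
// stands in for real draw submission.
class BuiltinForwardBackend final : public ISceneRendererBackend {
public:
    bool DrawOpaque() override { ++drawCalls; return true; }
    bool DrawSkybox() override { ++drawCalls; return true; }
    bool DrawTransparent() override { ++drawCalls; return true; }
    int drawCalls = 0;
};

// Stand-in for the managed SRP's main-scene organization: it orders the
// phases but never touches the backend's internals. Returns phases issued.
inline int RenderMainScene(ISceneRendererBackend& backend) {
    backend.DrawOpaque();
    backend.DrawSkybox();
    backend.DrawTransparent();
    return 3;
}
```

The design point: swapping in a deferred backend later would change the implementation behind the interface, not the managed organization code calling it.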

Stage 5: only then start the URP-like package

Once the previous four stages are complete, start on:

  • UniversalRenderPipelineAsset
  • UniversalRenderer
  • RendererFeature
  • RenderPassEvent
  • upper-layer organization of shadows/post-processing/volumetrics/Gaussians

At that point you will no longer be "building a house on an empty shell", but "building the official package layer on a runtime that already exists".


15. By the SRP stage, what should stay in C++ and what should move up to C#

Should stay stably in C++:

  1. RHI
  2. RenderGraph
  3. graph compiler / executor
  4. scene extraction / culling / frame data
  5. render surface / resource lifetime / barriers
  6. native draw/fullscreen/backend renderer contracts

Should gradually move up into the managed SRP / URP-like layer:

  1. pipeline asset organization
  2. upper-layer scheduling of renderer assets / renderer features / render passes
  3. the default shadow strategy
  4. post-processing chain organization
  5. injection timing and ordering of volumetric, Gaussian, and custom effects
  6. composition logic of user-defined pipelines

Should RenderGraph live at the C++ layer?

The answer is yes, and it should stay in the C++ native layer.

But the managed layer needs a "controlled wrapper" over it.

Do not turn managed code into a place that:

  • hand-rolls RHI barriers
  • hand-rolls native texture views
  • hand-rolls every low-level state machine

The right direction is:

  • C++ keeps the real graph kernel.
  • C# participates in graph recording and resource referencing through ScriptableRenderContext/wrappers.

That is the most stable, most future-proof plan for a small engine.
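"Controlled wrapping" usually comes down to opaque handles: the managed side refers to graph resources only by an id the native kernel minted, and never sees a texture view or a barrier. The sketch below is an assumption for illustration; `GraphTextureHandle` and `NativeGraphFacade` are invented names, not the engine's RenderGraph API.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

// Opaque to managed code: just an id, no pointer, no view, no state.
struct GraphTextureHandle {
    uint32_t id = 0;
};

// Hypothetical facade the bridge would expose: it mints handles and keeps
// the real resource bookkeeping (lifetimes, barriers) on the native side.
class NativeGraphFacade {
public:
    GraphTextureHandle ImportTexture(const std::string& name) {
        GraphTextureHandle handle{m_nextId++};
        m_names[handle.id] = name;  // stand-in for real resource registration
        return handle;
    }

    // Managed code can only resolve handles through the facade.
    const std::string& NameOf(GraphTextureHandle handle) const {
        return m_names.at(handle.id);
    }

private:
    uint32_t m_nextId = 1;
    std::unordered_map<uint32_t, std::string> m_names;
};
```

Because the handle carries no native pointer, passing it across the managed/native boundary is cheap and safe, and the kernel stays free to reorder, alias, or recreate the underlying resources.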


16. If you want to keep reading the source now, use this order

First pass: only the main chain

  1. SceneRenderer
  2. SceneRenderRequestPlanner
  3. RenderPipelineHost
  4. CameraFramePlanBuilder
  5. CameraFramePlan
  6. CameraRenderer
  7. ExecuteCameraFrameRenderGraphPlan

Second pass: RenderGraph

  1. RenderGraph.h
  2. RenderGraph.cpp
  3. RenderGraphCompiler.cpp
  4. RenderGraphExecutor.cpp
  5. RenderGraphBlackboard.h

Third pass: "how a camera's frame gets recorded into the graph"

  1. Recorder.cpp
  2. StageDispatch.cpp
  3. State.cpp
  4. StageContract.cpp
  5. SequenceRecorder.cpp
  6. PassRecorder.cpp
  7. SurfaceUtils.cpp

Fourth pass: the builtin pipeline and the SRP seam

  1. BuiltinForwardPipeline.h
  2. BuiltinForwardPipelineFrame.cpp
  3. BuiltinForwardSceneSetup.cpp
  4. BuiltinForwardStageGraphBuilder.cpp
  5. SceneRenderFeatureHost.cpp
  6. ScriptableRenderPipelineHost.cpp
  7. ManagedScriptableRenderPipelineAsset.cpp
  8. managed/XCEngine.ScriptCore/ScriptableRenderPipeline*.cs

One final summary

The true state of your current rendering module can be captured in one sentence:

The native C++ side has already built a rendering execution kernel based on CameraFramePlan + RenderGraph, and has left the ScriptableRenderPipelineHost SRP seam in place; but the managed C# side has not yet formed a runnable SRP runtime, so the most correct next step today is not building a URP package directly, but first wiring up the managed SRP runtime and ScriptableRenderContext v1.

If you internalize this sentence, then whether you go on to build SRP, a URP-like layer, deferred rendering, or gradually move shadows/volumetrics/Gaussians upward, you will not drift off course.