Add renderdoc_parser: direct-call Python interface for RenderDoc capture analysis
- Convert from MCP protocol layer to direct Python function calls
- 42 functions across 9 modules: session, event, pipeline, resource, data, shader, advanced, performance, diagnostic
- Requires Python 3.6 (renderdoc.pyd is compiled for Python 3.6)
- Fix renderdoc API calls: GetColorBlends, GetStencilFaces, GetViewport(i), GetScissor(i)
- Remove Python 3.10+ type annotations for Python 3.6 compatibility
- Add README.md with full API documentation
- Includes test.py for basic smoke testing
153
engine/tools/renderdoc_parser/README.md
Normal file
@@ -0,0 +1,153 @@
# renderdoc_parser

Direct-call Python interface for RenderDoc capture analysis. No MCP protocol required: import and call functions directly.

**Requires Python 3.6** (the `renderdoc.pyd` extension module is compiled for Python 3.6).

## Quick Start

```python
import sys
sys.path.insert(0, "engine/tools")
from renderdoc_parser import open_capture, get_capture_info, get_frame_overview

open_capture("frame.rdc")
print(get_capture_info())
print(get_frame_overview())
```

## Requirements

- Python 3.6
- `renderdoc.pyd` and `renderdoc.dll` in `engine/third_party/renderdoc/`
- Supported APIs: D3D12, D3D11, Vulkan, OpenGL ES, OpenGL, Metal

## API Reference

### Session (4 functions)

| Function | Description |
|----------|-------------|
| `open_capture(filepath)` | Open a `.rdc` capture file. Returns API type, action/resource counts. |
| `close_capture()` | Close the current capture and free resources. |
| `get_capture_info()` | Capture metadata: API, resolution, texture/buffer counts, GPU quirks. |
| `get_frame_overview()` | Frame stats: draw calls, clears, dispatches, memory usage, render targets. |

### Event Navigation (5 functions)

| Function | Description |
|----------|-------------|
| `list_actions(max_depth=2, filter_flags=None, filter=None, event_type=None)` | Draw call/action tree. `max_depth` limits recursion. `filter_flags` accepts `["Drawcall", "Clear", "Dispatch"]`. `event_type` shorthand: `"draw"`, `"dispatch"`, `"clear"`. |
| `get_action(event_id)` | Get a single ActionDescription by event ID. |
| `set_event(event_id)` | Navigate to a specific event. |
| `search_actions(name_pattern=None, flags=None)` | Search actions by name substring or ActionFlags. |
| `find_draws(filter=None)` | Find all draw calls, optionally filtered by name substring. |

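The action tree returned by `list_actions()` nests child actions under their parents, so callers typically walk it recursively. A minimal traversal sketch; the `actions`/`children` dict shape used here is an assumption for illustration, not a documented return format:

```python
# Hypothetical helper: collect every event ID from a list_actions-style result.
# The {"actions": [{"event_id": ..., "children": [...]}]} shape is assumed.
def collect_event_ids(result):
    ids = []

    def walk(actions):
        for a in actions:
            ids.append(a["event_id"])
            walk(a.get("children", []))  # recurse into nested actions

    walk(result.get("actions", []))
    return ids

sample = {
    "actions": [
        {"event_id": 10, "name": "Clear", "children": []},
        {"event_id": 42, "name": "DrawIndexed", "children": [
            {"event_id": 43, "name": "DrawIndexed", "children": []},
        ]},
    ]
}
print(collect_event_ids(sample))  # [10, 42, 43]
```

On a real capture you would feed the IDs to `set_event()` or `get_action()` instead of printing them.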
### Pipeline State (4 functions)

| Function | Description |
|----------|-------------|
| `get_pipeline_state(event_id)` | Full pipeline state at an event (shaders, blend, rasterizer, depth-stencil, etc.). |
| `get_shader_bindings(event_id, stage=None)` | Shader resource bindings. `stage` accepts `"vertex"`, `"pixel"`, `"compute"`, etc. |
| `get_vertex_inputs(event_id)` | Vertex input layout (format, semantic, register). |
| `get_draw_call_state(event_id)` | Summary of the draw call at an event (indices, instances, topology, outputs). |

### Resources (4 functions)

| Function | Description |
|----------|-------------|
| `list_textures()` | All textures with dimensions, format, mip levels, memory estimate. |
| `list_buffers()` | All buffers with length and creation flags. |
| `list_resources()` | All resources (textures + buffers) unified. |
| `get_resource_usage(resource_id)` | Which events use a given resource and how. |

### Data Reading (8 functions)

| Function | Description |
|----------|-------------|
| `save_texture(resource_id, path, fmt=None)` | Save a texture to file. `fmt` accepts `"png"`, `"jpg"`, `"bmp"`, `"tga"`, `"hdr"`, `"exr"`, `"dds"`. |
| `get_buffer_data(resource_id)` | Raw bytes of a buffer. |
| `pick_pixel(x, y, event_id=None)` | RGBA value at a pixel. Defaults to current event if `event_id` is `None`. |
| `get_texture_stats(resource_id)` | Min/max/avg values for a texture. |
| `read_texture_pixels(resource_id, x, y, w, h)` | Pixel region as a 2D list of RGBA values. |
| `export_draw_textures(event_id, path)` | Save all render targets at a draw call to image files. |
| `save_render_target(resource_id, path)` | Save a specific render target to file. |
| `export_mesh(event_id, vertex_buffer=None, index_buffer=None)` | Export mesh data (positions, normals, UVs, indices) as a structured dict. |

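A common workflow is dumping every texture via `list_textures()` plus `save_texture()`. This sketch only builds the output paths; the `{"textures": [{"resource_id": ..., "name": ...}]}` result shape is an assumption for illustration:

```python
import os

# Hypothetical: plan save_texture() calls from a list_textures-style result.
# The dict shape is assumed, not the documented return format.
def export_plan(textures_result, out_dir="dump", fmt="png"):
    plan = []
    for tex in textures_result.get("textures", []):
        # Sanitize the name so it is a valid filename component.
        safe = tex["name"].replace("/", "_")
        plan.append((tex["resource_id"], os.path.join(out_dir, safe + "." + fmt)))
    return plan

sample = {"textures": [{"resource_id": "ResourceId::57", "name": "GBuffer/Albedo"}]}
for rid, path in export_plan(sample):
    print(rid, "->", path)  # then: save_texture(rid, path, fmt="png")
```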
### Shader Analysis (3 functions)

| Function | Description |
|----------|-------------|
| `disassemble_shader(event_id, stage=None, target=None)` | Disassembled shader code. `stage` is required. |
| `get_shader_reflection(event_id, stage=None)` | Shader reflection: inputs, outputs, cbuffers, textures, samplers. |
| `get_cbuffer_contents(event_id, stage, slot)` | Contents of a constant buffer slot for a given stage. |

### Advanced (6 functions)

| Function | Description |
|----------|-------------|
| `pixel_history(resource_id, x, y, event_id=None)` | All events that wrote to or modified a pixel, with before/after values. |
| `get_post_vs_data(event_id, stage="vertex")` | Post-vertex-shader data (positions, clip positions, cull distances). |
| `diff_draw_calls(event_id1, event_id2)` | Diff two draw calls: state differences and likely causes. |
| `analyze_render_passes(event_id)` | Render pass grouping: which draw calls contribute to which render targets. |
| `sample_pixel_region(event_id=None, resource_id=None, region=None, sample_count=256, anomaly_threshold=10.0)` | Batch-sample a render target region. Returns NaN/Inf/negative/extreme-bright pixel counts. |
| `debug_shader_at_pixel(event_id, pixel_x, pixel_y, stage="pixel", watch_variables=None)` | Step-by-step shader debug at a pixel. |

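`pixel_history()` reports per-event pass/fail status, which is usually summarized before reading it by hand. A small post-processing sketch; the `modifications` list shape is an assumption for illustration, not the documented return format:

```python
from collections import Counter

# Hypothetical: tally why writes to a pixel failed, from a pixel_history-style
# result. The "modifications"/"failure_reasons" keys are assumed.
def failure_summary(history_result):
    counts = Counter()
    for mod in history_result.get("modifications", []):
        if not mod.get("passed", True):
            for reason in mod.get("failure_reasons", []):
                counts[reason] += 1
    return dict(counts)

sample = {"modifications": [
    {"event_id": 42, "passed": True},
    {"event_id": 97, "passed": False, "failure_reasons": ["depth_test_failed"]},
    {"event_id": 120, "passed": False,
     "failure_reasons": ["depth_test_failed", "scissor_clipped"]},
]}
print(failure_summary(sample))  # {'depth_test_failed': 2, 'scissor_clipped': 1}
```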
### Performance (4 functions)

| Function | Description |
|----------|-------------|
| `get_pass_timing(granularity="pass", top_n=20)` | GPU timing per pass or per draw call. |
| `analyze_overdraw(sample_count=64)` | Overdraw heatmap data: which pixels are shaded how many times. |
| `analyze_bandwidth()` | Estimated memory bandwidth: texture reads, render target writes, buffer traffic. |
| `analyze_state_changes(track=None)` | Track state changes between draw calls. `track` accepts `["blend", "depth", "stencil", "rasterizer", "shader", "vertex", "index", "viewport"]`. |

### Diagnostics (4 functions)

| Function | Description |
|----------|-------------|
| `diagnose_negative_values(event_id)` | Detect render targets receiving negative values, which is undefined behavior on many GPUs. |
| `diagnose_precision_issues(event_id)` | Detect potential half-float precision issues in shaders. |
| `diagnose_reflection_mismatch(event_id)` | Detect shader resource counts/types that differ from pipeline state. |
| `diagnose_mobile_risks(event_id)` | Mobile GPU-specific risks based on detected driver (Adreno, Mali, PowerVR, Apple). |

## Return Format

Every function returns a Python `dict`. On error, the returned dict has the shape:

```python
{"error": "human-readable message", "code": "ERROR_CODE"}
```

Successful calls return a non-empty `dict` with function-specific data.

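Because failures are signaled in-band rather than raised, callers can wrap each call in a small unwrap helper. This sketch relies only on the error shape documented above; `unwrap` itself is not part of the package:

```python
# Hypothetical caller-side helper: raise on the documented error dict shape.
def unwrap(result):
    if isinstance(result, dict) and "error" in result:
        raise RuntimeError(result.get("code", "UNKNOWN") + ": " + result["error"])
    return result

# Success: the dict passes through unchanged.
ok = unwrap({"filepath": "frame.rdc", "api": "D3D12"})
print(ok["api"])  # D3D12

# Failure: the in-band error becomes an exception.
try:
    unwrap({"error": "No capture file is open. Use open_capture first.",
            "code": "NO_CAPTURE_OPEN"})
except RuntimeError as e:
    print(e)  # NO_CAPTURE_OPEN: No capture file is open. Use open_capture first.
```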
## File Structure

```
engine/tools/renderdoc_parser/
├── __init__.py              # Re-exports all 42 functions at package level
├── session.py               # RenderDoc session singleton
├── util.py                  # load_renderdoc(), serialization helpers, enum maps
├── test.rdc                 # Sample capture for testing
└── tools/
    ├── __init__.py          # Re-exports all functions by module
    ├── session_tools.py     # open_capture, close_capture, get_capture_info, get_frame_overview
    ├── event_tools.py       # list_actions, get_action, set_event, search_actions, find_draws
    ├── pipeline_tools.py    # get_pipeline_state, get_shader_bindings, get_vertex_inputs, get_draw_call_state
    ├── resource_tools.py    # list_textures, list_buffers, list_resources, get_resource_usage
    ├── data_tools.py        # save_texture, get_buffer_data, pick_pixel, get_texture_stats, ...
    ├── shader_tools.py      # disassemble_shader, get_shader_reflection, get_cbuffer_contents
    ├── advanced_tools.py    # pixel_history, get_post_vs_data, diff_draw_calls, ...
    ├── performance_tools.py # get_pass_timing, analyze_overdraw, analyze_bandwidth, analyze_state_changes
    └── diagnostic_tools.py  # diagnose_negative_values, diagnose_precision_issues, ...
```

## GPU Quirks

`get_capture_info()` automatically detects the GPU/driver and returns known quirks:

- **Adreno**: mediump precision issues, R11G11B10_FLOAT signed behavior, textureLod bugs
- **Mali**: R11G11B10_FLOAT undefined negative writes, discard early-Z interactions, mediump accumulation
- **PowerVR**: tile-based deferred rendering considerations, sampler binding limits
- **Apple GPU**: tile memory bandwidth, float16 performance recommendations
- **OpenGL ES**: precision qualifier correctness, extension compatibility
56
engine/tools/renderdoc_parser/__init__.py
Normal file
@@ -0,0 +1,56 @@
"""renderdoc_parser: Direct-call interface for RenderDoc capture analysis.

Usage:
    from renderdoc_parser import open_capture, get_capture_info, get_draw_call_state

    open_capture("frame.rdc")
    info = get_capture_info()
    state = get_draw_call_state(142)
    print(state)
"""

from .session import get_session
from .tools import (
    open_capture,
    close_capture,
    get_capture_info,
    get_frame_overview,
    list_actions,
    get_action,
    set_event,
    search_actions,
    find_draws,
    get_pipeline_state,
    get_shader_bindings,
    get_vertex_inputs,
    get_draw_call_state,
    list_textures,
    list_buffers,
    list_resources,
    get_resource_usage,
    save_texture,
    get_buffer_data,
    pick_pixel,
    get_texture_stats,
    read_texture_pixels,
    export_draw_textures,
    save_render_target,
    export_mesh,
    disassemble_shader,
    get_shader_reflection,
    get_cbuffer_contents,
    pixel_history,
    get_post_vs_data,
    diff_draw_calls,
    analyze_render_passes,
    sample_pixel_region,
    debug_shader_at_pixel,
    get_pass_timing,
    analyze_overdraw,
    analyze_bandwidth,
    analyze_state_changes,
    diagnose_negative_values,
    diagnose_precision_issues,
    diagnose_reflection_mismatch,
    diagnose_mobile_risks,
)
209
engine/tools/renderdoc_parser/session.py
Normal file
@@ -0,0 +1,209 @@
"""RenderDoc session manager - singleton managing capture file lifecycle."""

import os
from typing import Optional

from .util import rd, make_error


class RenderDocSession:
    """Manages a single RenderDoc capture file's lifecycle."""

    def __init__(self):
        self._initialized: bool = False
        self._cap = None  # CaptureFile
        self._controller = None  # ReplayController
        self._filepath: Optional[str] = None
        self._current_event: Optional[int] = None
        self._action_map = {}
        self._structured_file = None
        self._resource_id_cache = {}  # str(resourceId) -> resourceId
        self._texture_desc_cache = {}  # str(resourceId) -> TextureDescription

    def _ensure_initialized(self):
        """Initialize the replay system if not already done."""
        if not self._initialized:
            rd.InitialiseReplay(rd.GlobalEnvironment(), [])
            self._initialized = True

    @property
    def is_open(self) -> bool:
        return self._controller is not None

    @property
    def filepath(self) -> Optional[str]:
        return self._filepath

    @property
    def controller(self):
        return self._controller

    @property
    def structured_file(self):
        return self._structured_file

    @property
    def current_event(self) -> Optional[int]:
        return self._current_event

    @property
    def driver_name(self) -> str:
        """Get the API/driver name of the current capture."""
        return self._cap.DriverName() if self._cap else "unknown"

    @property
    def action_map(self):
        """Get the event_id -> ActionDescription mapping."""
        return self._action_map

    def require_open(self) -> Optional[dict]:
        """Return an error dict if no capture is open, else None."""
        if not self.is_open:
            return make_error(
                "No capture file is open. Use open_capture first.", "NO_CAPTURE_OPEN"
            )
        return None

    def open(self, filepath: str) -> dict:
        """Open a .rdc capture file. Closes any previously open capture."""
        self._ensure_initialized()

        if not os.path.isfile(filepath):
            return make_error(f"File not found: {filepath}", "API_ERROR")

        # Close any existing capture
        if self.is_open:
            self.close()

        cap = rd.OpenCaptureFile()
        result = cap.OpenFile(filepath, "", None)
        if result != rd.ResultCode.Succeeded:
            cap.Shutdown()
            return make_error(f"Failed to open file: {result}", "API_ERROR")

        if not cap.LocalReplaySupport():
            cap.Shutdown()
            return make_error("Capture cannot be replayed on this machine", "API_ERROR")

        result, controller = cap.OpenCapture(rd.ReplayOptions(), None)
        if result != rd.ResultCode.Succeeded:
            cap.Shutdown()
            return make_error(f"Failed to initialize replay: {result}", "API_ERROR")

        self._cap = cap
        self._controller = controller
        self._filepath = filepath
        self._current_event = None
        self._structured_file = controller.GetStructuredFile()

        # Build action map
        self._action_map = {}
        self._build_action_map(controller.GetRootActions())

        # Build resource caches
        self._resource_id_cache = {}
        self._texture_desc_cache = {}
        textures = controller.GetTextures()
        buffers = controller.GetBuffers()
        for tex in textures:
            key = str(tex.resourceId)
            self._resource_id_cache[key] = tex.resourceId
            self._texture_desc_cache[key] = tex
        for buf in buffers:
            key = str(buf.resourceId)
            self._resource_id_cache[key] = buf.resourceId
        for res in controller.GetResources():
            key = str(res.resourceId)
            if key not in self._resource_id_cache:
                self._resource_id_cache[key] = res.resourceId

        # Gather summary
        root_actions = controller.GetRootActions()

        return {
            "filepath": filepath,
            "api": cap.DriverName(),
            "total_actions": len(self._action_map),
            "root_actions": len(root_actions),
            "textures": len(textures),
            "buffers": len(buffers),
        }

    def _build_action_map(self, actions):
        """Recursively index all actions by event_id."""
        for a in actions:
            self._action_map[a.eventId] = a
            if len(a.children) > 0:
                self._build_action_map(a.children)

    def resolve_resource_id(self, resource_id_str: str):
        """Resolve a resource ID string to a ResourceId object, or None."""
        return self._resource_id_cache.get(resource_id_str)

    def get_texture_desc(self, resource_id_str: str):
        """Get a TextureDescription by resource ID string, or None."""
        return self._texture_desc_cache.get(resource_id_str)

    def close(self) -> dict:
        """Close the current capture."""
        if not self.is_open:
            return {"status": "no capture was open"}

        filepath = self._filepath
        self._controller.Shutdown()
        self._cap.Shutdown()
        self._controller = None
        self._cap = None
        self._filepath = None
        self._current_event = None
        self._action_map = {}
        self._structured_file = None
        self._resource_id_cache = {}
        self._texture_desc_cache = {}
        return {"status": "closed", "filepath": filepath}

    def set_event(self, event_id: int) -> Optional[dict]:
        """Navigate to a specific event. Returns error dict or None on success."""
        if event_id not in self._action_map:
            return make_error(f"Event ID {event_id} not found", "INVALID_EVENT_ID")
        self._controller.SetFrameEvent(event_id, True)
        self._current_event = event_id
        return None

    def ensure_event(self, event_id: Optional[int]) -> Optional[dict]:
        """If event_id is given, set it. If no event is current, return error.

        Returns error dict or None on success."""
        if event_id is not None:
            return self.set_event(event_id)
        if self._current_event is None:
            return make_error(
                "No event selected. Use set_event or pass event_id.", "INVALID_EVENT_ID"
            )
        return None

    def get_action(self, event_id: int):
        """Get an ActionDescription by event_id, or None."""
        return self._action_map.get(event_id)

    def get_root_actions(self):
        return self._controller.GetRootActions()

    def shutdown(self):
        """Full shutdown - close capture and deinitialize replay."""
        if self.is_open:
            self.close()
        if self._initialized:
            rd.ShutdownReplay()
            self._initialized = False


# Module-level singleton
_session: Optional[RenderDocSession] = None


def get_session() -> RenderDocSession:
    """Get or create the global RenderDocSession singleton."""
    global _session
    if _session is None:
        _session = RenderDocSession()
    return _session
32
engine/tools/renderdoc_parser/tools/__init__.py
Normal file
@@ -0,0 +1,32 @@
"""Direct-call tools - all functions exposed at package level for direct import."""

from .session_tools import (
    open_capture, close_capture, get_capture_info, get_frame_overview,
)
from .event_tools import (
    list_actions, get_action, set_event, search_actions, find_draws,
)
from .pipeline_tools import (
    get_pipeline_state, get_shader_bindings, get_vertex_inputs, get_draw_call_state,
)
from .resource_tools import (
    list_textures, list_buffers, list_resources, get_resource_usage,
)
from .data_tools import (
    save_texture, get_buffer_data, pick_pixel, get_texture_stats,
    read_texture_pixels, export_draw_textures, save_render_target, export_mesh,
)
from .shader_tools import (
    disassemble_shader, get_shader_reflection, get_cbuffer_contents,
)
from .advanced_tools import (
    pixel_history, get_post_vs_data, diff_draw_calls, analyze_render_passes,
    sample_pixel_region, debug_shader_at_pixel,
)
from .performance_tools import (
    get_pass_timing, analyze_overdraw, analyze_bandwidth, analyze_state_changes,
)
from .diagnostic_tools import (
    diagnose_negative_values, diagnose_precision_issues,
    diagnose_reflection_mismatch, diagnose_mobile_risks,
)
784
engine/tools/renderdoc_parser/tools/advanced_tools.py
Normal file
@@ -0,0 +1,784 @@
|
||||
"""Advanced tools: pixel_history, get_post_vs_data, diff_draw_calls, analyze_render_passes,
|
||||
sample_pixel_region, debug_shader_at_pixel."""
|
||||
|
||||
import math
|
||||
import struct
|
||||
from typing import Optional
|
||||
|
||||
from ..session import get_session
|
||||
from ..util import (
|
||||
rd,
|
||||
make_error,
|
||||
flags_to_list,
|
||||
MESH_DATA_STAGE_MAP,
|
||||
serialize_shader_variable,
|
||||
SHADER_STAGE_MAP,
|
||||
)
|
||||
|
||||
|
||||
def sample_pixel_region(
|
||||
event_id: Optional[int] = None,
|
||||
resource_id: Optional[str] = None,
|
||||
region: Optional[dict] = None,
|
||||
sample_count: int = 256,
|
||||
anomaly_threshold: float = 10.0,
|
||||
) -> dict:
|
||||
"""Batch-sample pixels from a render target and auto-detect anomalies.
|
||||
|
||||
Scans for NaN / Inf / negative values and extreme-bright pixels.
|
||||
Best tool for locating anomalous color values and IBL / HDR issues.
|
||||
|
||||
Args:
|
||||
event_id: Sample at this event's render target state. Uses current event if omitted.
|
||||
resource_id: Specific render target resource ID. If omitted, uses the first
|
||||
color output of the current event.
|
||||
region: Optional sampling region {"x":0,"y":0,"width":W,"height":H}.
|
||||
If omitted, samples the full texture uniformly.
|
||||
sample_count: Number of sample points (default 256, max 1024).
|
||||
anomaly_threshold: Pixels with any channel exceeding this value are flagged
|
||||
as extreme-bright (useful for HDR targets). Default 10.0.
|
||||
"""
|
||||
session = get_session()
|
||||
err = session.require_open()
|
||||
if err:
|
||||
return err
|
||||
err = session.ensure_event(event_id)
|
||||
if err:
|
||||
return err
|
||||
|
||||
if resource_id is not None:
|
||||
tex_id = session.resolve_resource_id(resource_id)
|
||||
if tex_id is None:
|
||||
return make_error(
|
||||
f"Resource '{resource_id}' not found", "INVALID_RESOURCE_ID"
|
||||
)
|
||||
tex_desc = session.get_texture_desc(resource_id)
|
||||
else:
|
||||
state = session.controller.GetPipelineState()
|
||||
try:
|
||||
outputs = state.GetOutputTargets()
|
||||
color_target = next((o for o in outputs if int(o.resource) != 0), None)
|
||||
except Exception:
|
||||
color_target = None
|
||||
if color_target is None:
|
||||
return make_error("No color render target at current event", "API_ERROR")
|
||||
rid_str = str(color_target.resource)
|
||||
tex_id = session.resolve_resource_id(rid_str)
|
||||
tex_desc = session.get_texture_desc(rid_str)
|
||||
resource_id = rid_str
|
||||
|
||||
if tex_desc is None:
|
||||
return make_error("Could not get texture description", "API_ERROR")
|
||||
|
||||
tex_w, tex_h = tex_desc.width, tex_desc.height
|
||||
if region:
|
||||
rx = max(0, region.get("x", 0))
|
||||
ry = max(0, region.get("y", 0))
|
||||
rw = min(region.get("width", tex_w), tex_w - rx)
|
||||
rh = min(region.get("height", tex_h), tex_h - ry)
|
||||
else:
|
||||
rx, ry, rw, rh = 0, 0, tex_w, tex_h
|
||||
|
||||
sample_count = min(sample_count, 1024)
|
||||
|
||||
import math as _math
|
||||
|
||||
cols = max(1, int(_math.sqrt(sample_count * rw / max(rh, 1))))
|
||||
rows = max(1, sample_count // cols)
|
||||
step_x = max(1, rw // cols)
|
||||
step_y = max(1, rh // rows)
|
||||
|
||||
sample_points: list = []
|
||||
for r in range(rows):
|
||||
for c in range(cols):
|
||||
px = rx + c * step_x + step_x // 2
|
||||
py = ry + r * step_y + step_y // 2
|
||||
if px < rx + rw and py < ry + rh:
|
||||
sample_points.append((px, py))
|
||||
|
||||
nan_count = inf_count = neg_count = bright_count = 0
|
||||
hotspots: list = []
|
||||
values_r: list = []
|
||||
values_g: list = []
|
||||
values_b: list = []
|
||||
|
||||
for px, py in sample_points:
|
||||
try:
|
||||
val = session.controller.PickPixel(
|
||||
tex_id, px, py, rd.Subresource(0, 0, 0), rd.CompType.Typeless
|
||||
)
|
||||
r, g, b = val.floatValue[0], val.floatValue[1], val.floatValue[2]
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
pixel = [round(r, 6), round(g, 6), round(b, 6)]
|
||||
anomaly_type: Optional[str] = None
|
||||
|
||||
if any(_math.isnan(v) for v in [r, g, b]):
|
||||
nan_count += 1
|
||||
anomaly_type = "NaN"
|
||||
elif any(_math.isinf(v) for v in [r, g, b]):
|
||||
inf_count += 1
|
||||
anomaly_type = "Inf"
|
||||
pixel = [
|
||||
"+Inf"
|
||||
if _math.isinf(v) and v > 0
|
||||
else ("-Inf" if _math.isinf(v) else v)
|
||||
for v in pixel
|
||||
]
|
||||
elif any(v < 0 for v in [r, g, b]):
|
||||
neg_count += 1
|
||||
anomaly_type = "negative"
|
||||
elif max(r, g, b) > anomaly_threshold:
|
||||
bright_count += 1
|
||||
anomaly_type = "extreme_bright"
|
||||
|
||||
if anomaly_type:
|
||||
hotspots.append({"pixel": [px, py], "value": pixel, "type": anomaly_type})
|
||||
|
||||
if not any(_math.isnan(v) or _math.isinf(v) for v in [r, g, b]):
|
||||
values_r.append(r)
|
||||
values_g.append(g)
|
||||
values_b.append(b)
|
||||
|
||||
total = len(sample_points)
|
||||
anomaly_total = nan_count + inf_count + neg_count + bright_count
|
||||
|
||||
stats: dict = {}
|
||||
for ch, vals in [("r", values_r), ("g", values_g), ("b", values_b)]:
|
||||
if vals:
|
||||
stats[ch] = {
|
||||
"min": round(min(vals), 6),
|
||||
"max": round(max(vals), 6),
|
||||
"mean": round(sum(vals) / len(vals), 6),
|
||||
}
|
||||
|
||||
MAX_HOTSPOTS = 20
|
||||
hotspots_out = hotspots[:MAX_HOTSPOTS]
|
||||
|
||||
hints: list = []
|
||||
if inf_count > 0:
|
||||
hints.append(
|
||||
f"Inf 像素 {inf_count} 个——建议对这些像素执行 pixel_history 追踪来源"
|
||||
)
|
||||
if neg_count > 0:
|
||||
hints.append(
|
||||
f"负值像素 {neg_count} 个——可能来自 IBL SH 采样或 HDR 溢出,建议检查写入该 RT 的 draw call"
|
||||
)
|
||||
if nan_count > 0:
|
||||
hints.append(f"NaN 像素 {nan_count} 个——通常由除以零或 0/0 运算引起")
|
||||
if bright_count > 0:
|
||||
hints.append(
|
||||
f"极亮像素 {bright_count} 个(>{anomaly_threshold})——可能溢出或曝光异常"
|
||||
)
|
||||
|
||||
result = {
|
||||
"resource_id": resource_id,
|
||||
"render_target": f"{getattr(tex_desc, 'name', None) or resource_id} ({tex_desc.format.Name()}) {tex_w}x{tex_h}",
|
||||
"total_samples": total,
|
||||
"anomalies": {
|
||||
"nan_count": nan_count,
|
||||
"inf_count": inf_count,
|
||||
"negative_count": neg_count,
|
||||
"extreme_bright_count": bright_count,
|
||||
"anomaly_total": anomaly_total,
|
||||
"anomaly_rate": f"{anomaly_total / max(total, 1) * 100:.1f}%",
|
||||
},
|
||||
"hotspots": hotspots_out,
|
||||
"statistics": stats,
|
||||
}
|
||||
if hints:
|
||||
result["diagnosis_hint"] = " | ".join(hints)
|
||||
return result
|
||||
|
||||
|
||||
def debug_shader_at_pixel(
|
||||
event_id: int,
|
||||
pixel_x: int,
|
||||
pixel_y: int,
|
||||
stage: str = "pixel",
|
||||
watch_variables: Optional[list] = None,
|
||||
) -> dict:
|
||||
"""Debug the shader at a specific pixel using RenderDoc's shader debugger.
|
||||
|
||||
Executes the shader for the given pixel and returns intermediate variable
|
||||
values at each step. Useful for tracing negative values, IBL computation
|
||||
errors, TAA artifacts, and other precision issues.
|
||||
|
||||
Note: Shader debugging may not be supported for all API/GPU combinations.
|
||||
If unsupported, falls back to reporting pixel value and bound shader info.
|
||||
|
||||
Args:
|
||||
event_id: The event ID of the draw call.
|
||||
pixel_x: X coordinate (render target space, origin top-left).
|
||||
pixel_y: Y coordinate.
|
||||
stage: Shader stage to debug: "pixel" (default) or "vertex".
|
||||
watch_variables: Optional list of variable names to focus on
|
||||
(e.g. ["color", "iblDiffuse", "exposure"]).
|
||||
If omitted, all variables are returned (may be large).
|
||||
"""
|
||||
session = get_session()
|
||||
err = session.require_open()
|
||||
if err:
|
||||
return err
|
||||
err = session.set_event(event_id)
|
||||
if err:
|
||||
return err
|
||||
|
||||
stage_enum = SHADER_STAGE_MAP.get(stage.lower())
|
||||
if stage_enum is None:
|
||||
return make_error(f"Unknown shader stage: {stage}", "API_ERROR")
|
||||
|
||||
state = session.controller.GetPipelineState()
|
||||
refl = state.GetShaderReflection(stage_enum)
|
||||
if refl is None:
|
||||
return make_error(f"No shader bound at stage '{stage}'", "API_ERROR")
|
||||
|
||||
pixel_val_info: dict = {}
|
||||
try:
|
||||
outputs = state.GetOutputTargets()
|
||||
if outputs:
|
||||
first_rt = next((o for o in outputs if int(o.resource) != 0), None)
|
||||
if first_rt:
|
||||
rt_id = first_rt.resource
|
||||
pv = session.controller.PickPixel(
|
||||
rt_id,
|
||||
pixel_x,
|
||||
pixel_y,
|
||||
rd.Subresource(0, 0, 0),
|
||||
rd.CompType.Typeless,
|
||||
)
|
||||
pixel_val_info = {
|
||||
"current_pixel_rgba": [
|
||||
round(pv.floatValue[0], 6),
|
||||
round(pv.floatValue[1], 6),
|
||||
round(pv.floatValue[2], 6),
|
||||
round(pv.floatValue[3], 6),
|
||||
],
|
||||
}
|
||||
r, g, b = pv.floatValue[0], pv.floatValue[1], pv.floatValue[2]
|
||||
anomalies = []
|
||||
if any(math.isnan(v) for v in [r, g, b]):
|
||||
anomalies.append("NaN detected in output pixel")
|
||||
if any(math.isinf(v) for v in [r, g, b]):
|
||||
anomalies.append("Inf detected in output pixel")
|
||||
if any(v < 0 for v in [r, g, b]):
|
||||
anomalies.append(
|
||||
f"Negative value in output: [{r:.4f}, {g:.4f}, {b:.4f}]"
|
||||
)
|
||||
if anomalies:
|
||||
pixel_val_info["pixel_anomalies"] = anomalies
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
trace_result: Optional[dict] = None
|
||||
debug_error: Optional[str] = None
|
||||
try:
|
||||
if stage_enum == rd.ShaderStage.Pixel:
|
||||
shader_debug = session.controller.DebugPixel(
|
||||
pixel_x,
|
||||
pixel_y,
|
||||
rd.DebugPixelInputs(),
|
||||
)
|
||||
else:
|
||||
shader_debug = None
|
||||
|
||||
if (
|
||||
shader_debug is not None
|
||||
and hasattr(shader_debug, "states")
|
||||
and shader_debug.states
|
||||
):
            states = shader_debug.states
            steps: list = []

            for step_state in states:
                step: dict = {}
                try:
                    changed = []
                    if hasattr(step_state, "locals"):
                        for var in step_state.locals:
                            vname = var.name
                            if watch_variables and not any(
                                w.lower() in vname.lower() for w in watch_variables
                            ):
                                continue
                            sv = serialize_shader_variable(var)
                            val = sv.get("value", [])
                            if isinstance(val, list):
                                flat = [v for v in val if isinstance(v, (int, float))]
                                if any(math.isnan(v) for v in flat):
                                    sv["warning"] = f"⚠️ {vname} contains NaN"
                                elif any(math.isinf(v) for v in flat):
                                    sv["warning"] = f"⚠️ {vname} contains Inf"
                                elif any(v < 0 for v in flat):
                                    sv["warning"] = f"⚠️ {vname} contains negative values: {flat}"
                            changed.append(sv)
                    if changed:
                        step["variables"] = changed
                except Exception:
                    pass

                if step:
                    steps.append(step)

            trace_result = {
                "steps_with_changes": len(steps),
                "trace": steps[:200],
            }
        else:
            debug_error = "DebugPixel returned no trace data (may not be supported for this API/GPU)"
    except Exception as e:
        debug_error = f"Shader debugging failed: {type(e).__name__}: {e}"

    result: dict = {
        "event_id": event_id,
        "pixel": [pixel_x, pixel_y],
        "stage": stage,
        "shader_resource_id": str(refl.resourceId),
        "entry_point": refl.entryPoint,
    }
    result.update(pixel_val_info)

    if trace_result is not None:
        result["debug_trace"] = trace_result
    else:
        result["debug_note"] = debug_error or "Shader trace unavailable"
        result["fallback_info"] = {
            "constant_blocks": [cb.name for cb in refl.constantBlocks],
            "read_only_resources": [r.name for r in refl.readOnlyResources],
            "suggestion": "Use get_cbuffer_contents and pixel_history for manual investigation",
        }

    return result


def pixel_history(
    resource_id: str,
    x: int,
    y: int,
    event_id: Optional[int] = None,
) -> dict:
    """Get the full modification history of a pixel across all events in the frame.

    Shows every event that wrote to this pixel, with before/after values and
    pass/fail status (depth test, stencil test, etc.).

    Args:
        resource_id: The texture resource ID (must be a render target).
        x: X coordinate of the pixel.
        y: Y coordinate of the pixel.
        event_id: Optional event ID to navigate to first.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    tex_id = session.resolve_resource_id(resource_id)
    if tex_id is None:
        return make_error(
            f"Texture resource '{resource_id}' not found", "INVALID_RESOURCE_ID"
        )

    history = session.controller.PixelHistory(
        tex_id,
        x,
        y,
        rd.Subresource(0, 0, 0),
        rd.CompType.Typeless,
    )

    results = []
    for mod in history:
        passed = mod.Passed()
        entry: dict = {
            "event_id": mod.eventId,
            "passed": passed,
        }
        if not passed:
            failure_reasons = []
            try:
                if mod.backfaceCulled:
                    failure_reasons.append("backface_culled")
                if mod.depthTestFailed:
                    failure_reasons.append("depth_test_failed")
                if mod.stencilTestFailed:
                    failure_reasons.append("stencil_test_failed")
                if mod.scissorClipped:
                    failure_reasons.append("scissor_clipped")
                if mod.shaderDiscarded:
                    failure_reasons.append("shader_discarded")
                if mod.depthClipped:
                    failure_reasons.append("depth_clipped")
            except Exception:
                pass
            if failure_reasons:
                entry["failure_reasons"] = failure_reasons
        pre = mod.preMod
        entry["pre_value"] = {
            "r": pre.col.floatValue[0],
            "g": pre.col.floatValue[1],
            "b": pre.col.floatValue[2],
            "a": pre.col.floatValue[3],
            "depth": pre.depth,
            "stencil": pre.stencil,
        }
        post = mod.postMod
        entry["post_value"] = {
            "r": post.col.floatValue[0],
            "g": post.col.floatValue[1],
            "b": post.col.floatValue[2],
            "a": post.col.floatValue[3],
            "depth": post.depth,
            "stencil": post.stencil,
        }

        entry["pixel_changed"] = (
            pre.col.floatValue[0] != post.col.floatValue[0]
            or pre.col.floatValue[1] != post.col.floatValue[1]
            or pre.col.floatValue[2] != post.col.floatValue[2]
            or pre.col.floatValue[3] != post.col.floatValue[3]
        )

        results.append(entry)

    return {
        "resource_id": resource_id,
        "x": x,
        "y": y,
        "modifications": results,
        "count": len(results),
    }


def get_post_vs_data(
    stage: str = "vsout",
    max_vertices: int = 100,
    event_id: Optional[int] = None,
) -> dict:
    """Get post-vertex-shader transformed vertex data for the current draw call.

    Args:
        stage: Data stage: "vsin" (vertex input), "vsout" (after vertex shader),
            "gsout" (after geometry shader). Default: "vsout".
        max_vertices: Maximum number of vertices to return (default 100).
        event_id: Optional event ID to navigate to first.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    mesh_stage = MESH_DATA_STAGE_MAP.get(stage.lower())
    if mesh_stage is None:
        return make_error(
            f"Unknown mesh stage: {stage}. Valid: {list(MESH_DATA_STAGE_MAP.keys())}",
            "API_ERROR",
        )

    postvs = session.controller.GetPostVSData(0, 0, mesh_stage)

    if postvs.vertexResourceId == rd.ResourceId.Null():
        return make_error("No post-VS data available for current event", "API_ERROR")

    state = session.controller.GetPipelineState()
    if stage.lower() == "vsin":
        attrs = state.GetVertexInputs()
        attr_info = [{"name": a.name, "format": str(a.format.Name())} for a in attrs]
    else:
        refl_stage = rd.ShaderStage.Vertex
        if stage.lower() == "gsout":
            gs_refl = state.GetShaderReflection(rd.ShaderStage.Geometry)
            if gs_refl is not None:
                refl_stage = rd.ShaderStage.Geometry
        vs_refl = state.GetShaderReflection(refl_stage)
        if vs_refl is None:
            return make_error("No shader bound for requested stage", "API_ERROR")
        attr_info = []
        for sig in vs_refl.outputSignature:
            name = sig.semanticIdxName if sig.varName == "" else sig.varName
            attr_info.append(
                {
                    "name": name,
                    "var_type": str(sig.varType),
                    "comp_count": sig.compCount,
                    "system_value": str(sig.systemValue),
                }
            )

    num_verts = min(postvs.numIndices, max_vertices)
    data = session.controller.GetBufferData(
        postvs.vertexResourceId,
        postvs.vertexByteOffset,
        num_verts * postvs.vertexByteStride,
    )

    vertices = []
    floats_per_vertex = postvs.vertexByteStride // 4
    if floats_per_vertex == 0:
        return make_error(
            f"Invalid vertex stride ({postvs.vertexByteStride} bytes), cannot parse vertex data",
            "API_ERROR",
        )
    for i in range(num_verts):
        offset = i * postvs.vertexByteStride
        if offset + postvs.vertexByteStride > len(data):
            break
        vertex_floats = list(struct.unpack_from(f"{floats_per_vertex}f", data, offset))
        vertices.append([round(f, 6) for f in vertex_floats])

    return {
        "stage": stage,
        "event_id": session.current_event,
        "attributes": attr_info,
        "vertex_stride": postvs.vertexByteStride,
        "total_vertices": postvs.numIndices,
        "returned_vertices": len(vertices),
        "vertices": vertices,
    }


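The stride-based unpacking loop above can be sketched in isolation. This is an illustrative, self-contained example using only the standard library; the 16-byte stride and the packed payload are made-up sample data, not RenderDoc output:

```python
import struct

def unpack_vertices(data, stride_bytes):
    # Interpret an interleaved vertex buffer as rows of float32 values,
    # one row per vertex, spaced exactly stride_bytes apart.
    floats_per_vertex = stride_bytes // 4
    verts = []
    for off in range(0, len(data) - stride_bytes + 1, stride_bytes):
        verts.append(list(struct.unpack_from("%df" % floats_per_vertex, data, off)))
    return verts

# Two vertices of four floats each (16-byte stride).
sample = struct.pack("8f", 0, 1, 2, 3, 4, 5, 6, 7)
print(unpack_vertices(sample, 16))  # [[0.0, 1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0]]
```

A partial trailing vertex is silently dropped, matching the `offset + stride > len(data)` break above.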
def diff_draw_calls(eid1: int, eid2: int) -> dict:
    """Compare two draw calls and return their state differences.

    Useful for understanding what changed between two similar draw calls.

    Args:
        eid1: Event ID of the first draw call.
        eid2: Event ID of the second draw call.
    """
    from ..tools.pipeline_tools import _get_draw_state_dict

    session = get_session()
    err = session.require_open()
    if err:
        return err

    state1 = _get_draw_state_dict(session, eid1)
    if "error" in state1:
        return state1

    state2 = _get_draw_state_dict(session, eid2)
    if "error" in state2:
        return state2

    raw_diff = _diff_dicts(state1, state2)
    differences = _add_implications(raw_diff)

    return {
        "eid1": eid1,
        "eid2": eid2,
        "differences": differences,
        "identical": len(differences) == 0,
        "summary": (
            f"Found {len(differences)} difference(s)"
            if differences
            else "The pipeline states of the two draw calls are identical"
        ),
    }


def analyze_render_passes() -> dict:
    """Auto-detect render pass boundaries and summarize each pass.

    Detects passes by Clear actions and output target changes.
    Returns a list of render passes with draw count, RT info, and event range.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    sf = session.structured_file
    passes: list = []
    current_pass: Optional[dict] = None
    last_outputs: Optional[tuple] = None

    for eid in sorted(session.action_map.keys()):
        action = session.action_map[eid]
        is_clear = bool(action.flags & rd.ActionFlags.Clear)
        is_draw = bool(action.flags & rd.ActionFlags.Drawcall)

        if not is_clear and not is_draw:
            continue

        outputs = tuple(str(o) for o in action.outputs if int(o) != 0)

        new_pass = False
        if is_clear:
            new_pass = True
        elif outputs and outputs != last_outputs:
            new_pass = True

        if new_pass:
            if current_pass is not None:
                passes.append(current_pass)
            current_pass = {
                "pass_index": len(passes),
                "start_event": eid,
                "end_event": eid,
                "start_action": action.GetName(sf),
                "draw_count": 0,
                "clear_count": 0,
                "render_targets": list(outputs) if outputs else [],
            }

        if current_pass is None:
            current_pass = {
                "pass_index": 0,
                "start_event": eid,
                "end_event": eid,
                "start_action": action.GetName(sf),
                "draw_count": 0,
                "clear_count": 0,
                "render_targets": list(outputs) if outputs else [],
            }

        current_pass["end_event"] = eid
        if is_draw:
            current_pass["draw_count"] += 1
        if is_clear:
            current_pass["clear_count"] += 1

        if outputs:
            last_outputs = outputs
            for o in outputs:
                if o not in current_pass["render_targets"]:
                    current_pass["render_targets"].append(o)

    if current_pass is not None:
        passes.append(current_pass)

    for p in passes:
        rt_info = []
        for rid_str in p["render_targets"]:
            entry: dict = {"resource_id": rid_str}
            tex_desc = session.get_texture_desc(rid_str)
            if tex_desc is not None:
                entry["size"] = f"{tex_desc.width}x{tex_desc.height}"
                entry["format"] = str(tex_desc.format.Name())
            rt_info.append(entry)
        p["render_target_info"] = rt_info

    return {
        "passes": passes,
        "total_passes": len(passes),
    }


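The pass-boundary heuristic above (a new pass starts on a Clear, or when the bound outputs change) can be shown on its own. An illustrative, self-contained sketch; the event IDs and target names are made up:

```python
def pass_boundaries(actions):
    # actions: (event_id, is_clear, outputs) triples, in event order.
    # A new pass starts on a Clear, or when the output set changes.
    starts, last_outputs = [], None
    for eid, is_clear, outputs in actions:
        if is_clear or (outputs and outputs != last_outputs):
            starts.append(eid)
        if outputs:
            last_outputs = outputs
    return starts

events = [
    (10, True, ("shadow_rt",)),   # clear -> new pass
    (11, False, ("shadow_rt",)),  # same target -> same pass
    (20, False, ("scene_rt",)),   # target change -> new pass
    (30, True, ("scene_rt",)),    # clear -> new pass
]
print(pass_boundaries(events))  # [10, 20, 30]
```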
def _diff_dicts(d1: dict, d2: dict, path: str = "") -> dict:
    diff: dict = {}
    all_keys = set(d1.keys()) | set(d2.keys())

    for key in all_keys:
        key_path = f"{path}.{key}" if path else key
        v1 = d1.get(key)
        v2 = d2.get(key)

        if v1 == v2:
            continue

        if isinstance(v1, dict) and isinstance(v2, dict):
            sub = _diff_dicts(v1, v2, key_path)
            if sub:
                diff[key] = sub
        elif isinstance(v1, list) and isinstance(v2, list):
            if v1 != v2:
                diff[key] = {"eid1": v1, "eid2": v2}
        else:
            diff[key] = {"eid1": v1, "eid2": v2}

    return diff


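The shape of the recursive diff above is easiest to see on plain data. A standalone, simplified copy for illustration only (the input dicts are made up):

```python
def diff_dicts(d1, d2):
    # Simplified standalone copy of the recursive diff: shared unequal
    # leaves become {"eid1": ..., "eid2": ...} entries, nested dicts recurse.
    diff = {}
    for key in set(d1) | set(d2):
        v1, v2 = d1.get(key), d2.get(key)
        if v1 == v2:
            continue
        if isinstance(v1, dict) and isinstance(v2, dict):
            sub = diff_dicts(v1, v2)
            if sub:
                diff[key] = sub
        else:
            diff[key] = {"eid1": v1, "eid2": v2}
    return diff

print(diff_dicts({"depth": {"test": True}}, {"depth": {"test": False}}))
# {'depth': {'test': {'eid1': True, 'eid2': False}}}
```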
def _implication_for(suffix: str, v1, v2) -> Optional[str]:
    s1, s2 = str(v1), str(v2)
    if suffix == "blend.enabled":
        detail = (
            "Alpha blending is active; output colors will blend with the background"
            if v2
            else "Blending disabled; output overwrites the target pixel directly (opaque mode)"
        )
        return f"Blend toggle: {s1}→{s2}. {detail}"
    if suffix == "blend.color_src":
        return f"Color source blend factor: {s1}→{s2}. May change output brightness/transparency"
    if suffix == "blend.color_dst":
        return f"Color destination blend factor: {s1}→{s2}. Affects how much the background color contributes to the result"
    if suffix == "blend.color_op":
        return f"Color blend operation: {s1}→{s2}. A different operation may shift the overall color"
    if suffix == "blend.alpha_src":
        return f"Alpha source blend factor: {s1}→{s2}"
    if suffix == "depth.test":
        detail = (
            "When disabled, geometry may render through other geometry"
            if not v2
            else "When enabled, make sure the depth buffer is properly initialized"
        )
        return f"Depth test: {s1}→{s2}. {detail}"
    if suffix == "depth.write":
        detail = (
            "When disabled, this draw call does not update the depth buffer (typical for transparent objects)"
            if not v2
            else "When enabled, depth values are updated"
        )
        return f"Depth write: {s1}→{s2}. {detail}"
    if suffix == "depth.func":
        return f"Depth comparison function: {s1}→{s2}. May cause occlusion errors or missing geometry"
    if suffix == "stencil.enabled":
        return f"Stencil test: {s1}→{s2}"
    if suffix == "rasterizer.cull":
        return (
            f"Cull mode: {s1}→{s2}. May affect back/front face visibility; reflection passes usually need inverted culling"
        )
    if suffix == "rasterizer.front_ccw":
        return f"Front-face winding: {s1}→{s2}. Switching CCW/CW flips the backface culling direction"
    if suffix == "topology":
        return f"Primitive topology: {s1}→{s2}"
    return None


_IMPLICATION_SUFFIXES = [
    "blend.enabled",
    "blend.color_src",
    "blend.color_dst",
    "blend.color_op",
    "blend.alpha_src",
    "depth.test",
    "depth.write",
    "depth.func",
    "stencil.enabled",
    "rasterizer.cull",
    "rasterizer.front_ccw",
    "topology",
]


def _add_implications(diff: dict) -> list:
    results: list = []

    def _flatten(d: dict, path: str):
        for key, val in d.items():
            key_path = f"{path}.{key}" if path else key
            if isinstance(val, dict):
                if "eid1" in val or "eid2" in val:
                    v1 = val.get("eid1")
                    v2 = val.get("eid2")
                    entry: dict = {"field": key_path, "eid1": v1, "eid2": v2}
                    for suffix in _IMPLICATION_SUFFIXES:
                        if key_path.endswith(suffix):
                            imp = _implication_for(suffix, v1, v2)
                            if imp:
                                entry["implication"] = imp
                            break
                    results.append(entry)
                else:
                    _flatten(val, key_path)

    _flatten(diff, "")
    return results
733
engine/tools/renderdoc_parser/tools/data_tools.py
Normal file
@@ -0,0 +1,733 @@
"""Data extraction tools: save_texture, get_buffer_data, pick_pixel, get_texture_stats, read_texture_pixels, export_draw_textures, save_render_target, export_mesh."""


import os
from typing import Optional

from ..session import get_session
from ..util import (
    rd,
    make_error,
    enum_str,
    FILE_TYPE_MAP,
    SHADER_STAGE_MAP,
    MESH_DATA_STAGE_MAP,
    TOPOLOGY_MAP,
)


MAX_BUFFER_READ = 65536


def save_texture(
    resource_id: str,
    output_path: str,
    file_type: str = "png",
    mip: int = 0,
    event_id: Optional[int] = None,
) -> dict:
    """Save a texture resource to an image file.

    Args:
        resource_id: The texture resource ID string.
        output_path: Absolute path for the output file.
        file_type: Output format: png, jpg, bmp, tga, hdr, exr, dds (default: png).
        mip: Mip level to save (default 0). Use -1 for all mips (DDS only).
        event_id: Optional event ID to navigate to first.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    tex_id = session.resolve_resource_id(resource_id)
    if tex_id is None:
        return make_error(
            f"Texture resource '{resource_id}' not found", "INVALID_RESOURCE_ID"
        )

    ft = FILE_TYPE_MAP.get(file_type.lower())
    if ft is None:
        return make_error(
            f"Unknown file type: {file_type}. Valid: {list(FILE_TYPE_MAP.keys())}",
            "API_ERROR",
        )

    output_path = os.path.normpath(output_path)
    os.makedirs(os.path.dirname(output_path) or ".", exist_ok=True)

    texsave = rd.TextureSave()
    texsave.resourceId = tex_id
    texsave.destType = ft
    texsave.mip = mip
    texsave.slice.sliceIndex = 0
    texsave.alpha = rd.AlphaMapping.Preserve

    session.controller.SaveTexture(texsave, output_path)

    return {
        "status": "saved",
        "output_path": output_path,
        "resource_id": resource_id,
        "file_type": file_type,
        "mip": mip,
    }


def get_buffer_data(
    resource_id: str,
    offset: int = 0,
    length: int = 256,
    format: str = "hex",
) -> dict:
    """Read raw data from a buffer resource.

    Args:
        resource_id: The buffer resource ID string.
        offset: Byte offset to start reading from (default 0).
        length: Number of bytes to read (default 256, max 65536).
        format: Output format: "hex" for hex dump, "floats" to interpret as float32 array.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    buf_id = session.resolve_resource_id(resource_id)
    if buf_id is None:
        return make_error(
            f"Buffer resource '{resource_id}' not found", "INVALID_RESOURCE_ID"
        )

    length = min(length, MAX_BUFFER_READ)
    data = session.controller.GetBufferData(buf_id, offset, length)

    result: dict = {
        "resource_id": resource_id,
        "offset": offset,
        "bytes_read": len(data),
    }

    if format == "floats":
        import struct

        num_floats = len(data) // 4
        floats = list(struct.unpack_from(f"{num_floats}f", data))
        result["data"] = [round(f, 6) for f in floats]
        result["format"] = "float32"
    else:
        lines = []
        for i in range(0, len(data), 16):
            chunk = data[i : i + 16]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            lines.append(f"{offset + i:08x}: {hex_part}")
        result["data"] = "\n".join(lines)
        result["format"] = "hex"

    return result


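The 16-bytes-per-row hex layout produced by the `format="hex"` branch above can be sketched standalone. Illustrative only; the sample bytes are made up:

```python
def hex_dump(data, offset=0):
    # Same layout as get_buffer_data(format="hex"): 16 bytes per row,
    # each row prefixed with its 8-digit hex byte offset.
    lines = []
    for i in range(0, len(data), 16):
        chunk = data[i:i + 16]
        hex_part = " ".join("%02x" % b for b in chunk)
        lines.append("%08x: %s" % (offset + i, hex_part))
    return "\n".join(lines)

print(hex_dump(bytes(range(20))))
```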
def pick_pixel(
    resource_id: str,
    x: int,
    y: int,
    event_id: Optional[int] = None,
) -> dict:
    """Get the RGBA value of a specific pixel in a texture.

    Args:
        resource_id: The texture resource ID string.
        x: X coordinate of the pixel.
        y: Y coordinate of the pixel.
        event_id: Optional event ID to navigate to first.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    tex_id = session.resolve_resource_id(resource_id)
    if tex_id is None:
        return make_error(
            f"Texture resource '{resource_id}' not found", "INVALID_RESOURCE_ID"
        )

    val = session.controller.PickPixel(
        tex_id, x, y, rd.Subresource(0, 0, 0), rd.CompType.Typeless
    )

    return {
        "resource_id": resource_id,
        "x": x,
        "y": y,
        "rgba": {
            "r": val.floatValue[0],
            "g": val.floatValue[1],
            "b": val.floatValue[2],
            "a": val.floatValue[3],
        },
        "rgba_uint": {
            "r": val.uintValue[0],
            "g": val.uintValue[1],
            "b": val.uintValue[2],
            "a": val.uintValue[3],
        },
    }


def get_texture_stats(
    resource_id: str,
    event_id: Optional[int] = None,
    all_slices: bool = False,
) -> dict:
    """Get min/max statistics for a texture at the current event.

    Args:
        resource_id: The texture resource ID string.
        event_id: Optional event ID to navigate to first.
        all_slices: If True, return per-slice/per-face statistics. Useful for
            cubemaps (returns stats for each of the 6 faces) and texture arrays.
            Default False returns only slice 0.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    tex_id = session.resolve_resource_id(resource_id)
    if tex_id is None:
        return make_error(
            f"Texture resource '{resource_id}' not found", "INVALID_RESOURCE_ID"
        )

    tex_desc = session.get_texture_desc(resource_id)

    def _minmax_to_dict(mm):
        import math

        mn, mx = mm[0], mm[1]
        r_min, g_min, b_min, a_min = (
            mn.floatValue[0],
            mn.floatValue[1],
            mn.floatValue[2],
            mn.floatValue[3],
        )
        r_max, g_max, b_max, a_max = (
            mx.floatValue[0],
            mx.floatValue[1],
            mx.floatValue[2],
            mx.floatValue[3],
        )
        has_neg = any(v < 0 for v in [r_min, g_min, b_min])
        has_inf = any(math.isinf(v) for v in [r_max, g_max, b_max])
        has_nan = any(math.isnan(v) for v in [r_min, g_min, b_min, r_max, g_max, b_max])
        d: dict = {
            "min": {"r": r_min, "g": g_min, "b": b_min, "a": a_min},
            "max": {"r": r_max, "g": g_max, "b": b_max, "a": a_max},
            "has_negative": has_neg,
            "has_inf": has_inf,
            "has_nan": has_nan,
        }
        warnings = []
        if has_nan:
            warnings.append("⚠️ NaN values detected")
        if has_inf:
            warnings.append("⚠️ Inf values detected")
        if has_neg:
            warnings.append("⚠️ Negative values detected")
        if warnings:
            d["warnings"] = warnings
        return d

    result: dict = {"resource_id": resource_id}
    if tex_desc is not None:
        result["name"] = getattr(tex_desc, "name", "")
        result["size"] = f"{tex_desc.width}x{tex_desc.height}"
        result["format"] = str(tex_desc.format.Name())
        result["mips"] = tex_desc.mips
        result["array_size"] = tex_desc.arraysize

    _FACE_NAMES = ["+X", "-X", "+Y", "-Y", "+Z", "-Z"]

    if all_slices and tex_desc is not None:
        num_slices = max(tex_desc.arraysize, 1)
        is_cubemap = num_slices == 6 or (
            hasattr(tex_desc, "dimension") and "Cube" in str(tex_desc.dimension)
        )
        per_slice = []
        for s in range(num_slices):
            try:
                mm = session.controller.GetMinMax(
                    tex_id, rd.Subresource(0, s, 0), rd.CompType.Typeless
                )
                entry = _minmax_to_dict(mm)
                if is_cubemap and s < 6:
                    entry["face"] = _FACE_NAMES[s]
                else:
                    entry["slice"] = s
                per_slice.append(entry)
            except Exception as e:
                per_slice.append({"slice": s, "error": str(e)})
        result["per_slice_stats"] = per_slice
    else:
        mm = session.controller.GetMinMax(
            tex_id, rd.Subresource(0, 0, 0), rd.CompType.Typeless
        )
        result.update(_minmax_to_dict(mm))

    return result


def read_texture_pixels(
    resource_id: str,
    x: int,
    y: int,
    width: int,
    height: int,
    mip_level: int = 0,
    array_slice: int = 0,
    event_id: Optional[int] = None,
) -> dict:
    """Read actual pixel values from a rectangular region of a texture.

    Returns float RGBA values for each pixel. Region is capped at 64x64.
    Useful for precisely checking IBL cubemap faces, LUT textures, history buffers, etc.

    Args:
        resource_id: The texture resource ID string.
        x: Top-left X coordinate of the region.
        y: Top-left Y coordinate of the region.
        width: Region width (max 64).
        height: Region height (max 64).
        mip_level: Mip level to read (default 0).
        array_slice: Array slice or cubemap face index (default 0).
            Cubemap face order: 0=+X 1=-X 2=+Y 3=-Y 4=+Z 5=-Z.
        event_id: Optional event ID to navigate to first.
    """
    import math as _math

    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    tex_id = session.resolve_resource_id(resource_id)
    if tex_id is None:
        return make_error(
            f"Texture resource '{resource_id}' not found", "INVALID_RESOURCE_ID"
        )

    width = min(width, 64)
    height = min(height, 64)

    pixels: list = []
    anomalies: list = []

    for py in range(y, y + height):
        row: list = []
        for px in range(x, x + width):
            try:
                val = session.controller.PickPixel(
                    tex_id,
                    px,
                    py,
                    rd.Subresource(mip_level, array_slice, 0),
                    rd.CompType.Typeless,
                )
                r, g, b, a = (
                    val.floatValue[0],
                    val.floatValue[1],
                    val.floatValue[2],
                    val.floatValue[3],
                )
                pixel = [round(r, 6), round(g, 6), round(b, 6), round(a, 6)]
                row.append(pixel)

                for ch_idx, ch_val in enumerate(pixel[:3]):
                    ch_name = "rgba"[ch_idx]
                    if _math.isnan(ch_val):
                        anomalies.append(
                            {"x": px, "y": py, "channel": ch_name, "type": "NaN"}
                        )
                    elif _math.isinf(ch_val):
                        anomalies.append(
                            {
                                "x": px,
                                "y": py,
                                "channel": ch_name,
                                "type": "Inf",
                                "value": str(ch_val),
                            }
                        )
                    elif ch_val < 0:
                        anomalies.append(
                            {
                                "x": px,
                                "y": py,
                                "channel": ch_name,
                                "type": "negative",
                                "value": round(ch_val, 6),
                            }
                        )
            except Exception as e:
                row.append({"error": str(e)})
        pixels.append(row)

    result: dict = {
        "resource_id": resource_id,
        "region": {"x": x, "y": y, "width": width, "height": height},
        "mip_level": mip_level,
        "array_slice": array_slice,
        "pixels": pixels,
    }
    if anomalies:
        result["anomalies"] = anomalies
        result["anomaly_count"] = len(anomalies)
    return result


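The per-channel NaN/Inf/negative triage used above can be shown in isolation. Illustrative, self-contained sketch; the sample values are made up:

```python
import math

def classify_channel(value):
    # Same triage order as read_texture_pixels applies per color channel:
    # NaN first, then Inf, then negative; healthy values return None.
    if math.isnan(value):
        return "NaN"
    if math.isinf(value):
        return "Inf"
    if value < 0:
        return "negative"
    return None

print([classify_channel(v) for v in (float("nan"), float("inf"), -0.5, 0.25)])
# ['NaN', 'Inf', 'negative', None]
```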
def export_draw_textures(
    event_id: int,
    output_dir: str,
    skip_small: bool = True,
) -> dict:
    """Batch export all textures bound to a draw call's pixel shader.

    Args:
        event_id: The event ID of the draw call.
        output_dir: Directory to save exported textures.
        skip_small: Skip textures 4x4 or smaller (placeholder textures). Default True.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.set_event(event_id)
    if err:
        return err

    state = session.controller.GetPipelineState()
    ps_refl = state.GetShaderReflection(rd.ShaderStage.Pixel)
    if ps_refl is None:
        return make_error("No pixel shader bound at this event", "API_ERROR")

    output_dir = os.path.normpath(output_dir)
    os.makedirs(output_dir, exist_ok=True)

    exported = []
    skipped = []
    try:
        all_ro = state.GetReadOnlyResources(rd.ShaderStage.Pixel)
        ro_by_index: dict = {}
        for b in all_ro:
            ro_by_index.setdefault(b.access.index, []).append(b)
    except Exception:
        ro_by_index = {}

    for i, ro_refl in enumerate(ps_refl.readOnlyResources):
        for b in ro_by_index.get(i, []):
            rid_str = str(b.descriptor.resource)
            tex_desc = session.get_texture_desc(rid_str)
            if tex_desc is None:
                continue

            if skip_small and tex_desc.width <= 4 and tex_desc.height <= 4:
                skipped.append(
                    {
                        "name": ro_refl.name,
                        "resource_id": rid_str,
                        "size": f"{tex_desc.width}x{tex_desc.height}",
                    }
                )
                continue

            filename = f"{ro_refl.name}_{tex_desc.width}x{tex_desc.height}.png"
            filename = filename.replace("/", "_").replace("\\", "_")
            out_path = os.path.join(output_dir, filename)

            texsave = rd.TextureSave()
            texsave.resourceId = tex_desc.resourceId
            texsave.destType = rd.FileType.PNG
            texsave.mip = 0
            texsave.slice.sliceIndex = 0
            texsave.alpha = rd.AlphaMapping.Preserve
            session.controller.SaveTexture(texsave, out_path)

            exported.append(
                {
                    "name": ro_refl.name,
                    "resource_id": rid_str,
                    "size": f"{tex_desc.width}x{tex_desc.height}",
                    "output_path": out_path,
                }
            )

    return {
        "event_id": event_id,
        "exported": exported,
        "exported_count": len(exported),
        "skipped": skipped,
        "skipped_count": len(skipped),
    }


def save_render_target(
    event_id: int,
    output_path: str,
    save_depth: bool = False,
) -> dict:
    """Save the current render target(s) at a specific event.

    Args:
        event_id: The event ID to capture the render target from.
        output_path: Output file path or directory. If directory, auto-names the file.
        save_depth: Also save the depth target (default False).
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.set_event(event_id)
    if err:
        return err

    state = session.controller.GetPipelineState()
    output_path = os.path.normpath(output_path)
    saved = []

    outputs = state.GetOutputTargets()
    color_target = None
    for o in outputs:
        if int(o.resource) != 0:
            color_target = o
            break

    if color_target is None:
        return make_error("No color render target bound at this event", "API_ERROR")

    rid_str = str(color_target.resource)
    tex_desc = session.get_texture_desc(rid_str)

    if os.path.isdir(output_path):
        fname = f"rt_color_eid{event_id}.png"
        color_path = os.path.join(output_path, fname)
    else:
        color_path = output_path
        os.makedirs(os.path.dirname(color_path) or ".", exist_ok=True)

    texsave = rd.TextureSave()
    texsave.resourceId = color_target.resource
    texsave.destType = rd.FileType.PNG
    texsave.mip = 0
    texsave.slice.sliceIndex = 0
    texsave.alpha = rd.AlphaMapping.Preserve
    session.controller.SaveTexture(texsave, color_path)

    color_info: dict = {
        "type": "color",
        "resource_id": rid_str,
        "output_path": color_path,
    }
    if tex_desc is not None:
        color_info["size"] = f"{tex_desc.width}x{tex_desc.height}"
        color_info["format"] = str(tex_desc.format.Name())
    saved.append(color_info)

    if save_depth:
        try:
            dt = state.GetDepthTarget()
            if int(dt.resource) != 0:
                dt_rid = str(dt.resource)
                if os.path.isdir(output_path):
                    depth_path = os.path.join(
                        output_path, f"rt_depth_eid{event_id}.png"
                    )
                else:
                    base, ext = os.path.splitext(color_path)
                    depth_path = f"{base}_depth{ext}"

                texsave = rd.TextureSave()
                texsave.resourceId = dt.resource
                texsave.destType = rd.FileType.PNG
                texsave.mip = 0
                texsave.slice.sliceIndex = 0
                texsave.alpha = rd.AlphaMapping.Preserve
                session.controller.SaveTexture(texsave, depth_path)

                depth_info: dict = {
                    "type": "depth",
                    "resource_id": dt_rid,
                    "output_path": depth_path,
                }
                dt_desc = session.get_texture_desc(dt_rid)
                if dt_desc is not None:
                    depth_info["size"] = f"{dt_desc.width}x{dt_desc.height}"
                    depth_info["format"] = str(dt_desc.format.Name())
                saved.append(depth_info)
        except Exception as exc:
            saved.append({"type": "depth", "error": f"Failed to save depth: {exc}"})

    return {"event_id": event_id, "saved": saved, "count": len(saved)}


def export_mesh(
    event_id: int,
    output_path: str,
) -> dict:
    """Export mesh data from a draw call in OBJ format.

    Uses post-vertex-shader data to get transformed positions, normals, and UVs.

    Args:
        event_id: The event ID of the draw call.
        output_path: Output file path for the .obj file.
    """
    import struct as _struct

    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.set_event(event_id)
    if err:
        return err

    postvs = session.controller.GetPostVSData(0, 0, rd.MeshDataStage.VSOut)
    if postvs.vertexResourceId == rd.ResourceId.Null():
        return make_error("No post-VS data available for this event", "API_ERROR")

    state = session.controller.GetPipelineState()
    vs_refl = state.GetShaderReflection(rd.ShaderStage.Vertex)
    if vs_refl is None:
        return make_error("No vertex shader bound", "API_ERROR")

    out_sig = vs_refl.outputSignature
    pos_idx = None
    norm_idx = None
    uv_idx = None
    float_offset = 0
    for sig in out_sig:
        name = (sig.semanticName or sig.varName or "").upper()
        if "POSITION" in name:
            pos_idx = float_offset
        elif "NORMAL" in name:
            norm_idx = float_offset
        elif "TEXCOORD" in name and uv_idx is None:
            uv_idx = float_offset
        float_offset += sig.compCount

    if pos_idx is None:
        pos_idx = 0

    num_verts = postvs.numIndices
    data = session.controller.GetBufferData(
        postvs.vertexResourceId,
        postvs.vertexByteOffset,
        num_verts * postvs.vertexByteStride,
    )

    floats_per_vertex = postvs.vertexByteStride // 4
    if floats_per_vertex == 0:
        return make_error(
            f"Invalid vertex stride ({postvs.vertexByteStride} bytes), cannot parse mesh data",
            "API_ERROR",
        )

    positions = []
    normals = []
    uvs = []
    for i in range(num_verts):
        off = i * postvs.vertexByteStride
        if off + postvs.vertexByteStride > len(data):
            break
        vfloats = list(_struct.unpack_from(f"{floats_per_vertex}f", data, off))

        if pos_idx is not None and pos_idx + 3 <= len(vfloats):
            positions.append(
                (vfloats[pos_idx], vfloats[pos_idx + 1], vfloats[pos_idx + 2])
            )
        else:
            positions.append((0.0, 0.0, 0.0))

        if norm_idx is not None and norm_idx + 3 <= len(vfloats):
            normals.append(
                (vfloats[norm_idx], vfloats[norm_idx + 1], vfloats[norm_idx + 2])
            )

        if uv_idx is not None and uv_idx + 2 <= len(vfloats):
            uvs.append((vfloats[uv_idx], vfloats[uv_idx + 1]))

    output_path = os.path.normpath(output_path)
    os.makedirs(os.path.dirname(output_path) or ".", exist_ok=True)

    try:
        topo = state.GetPrimitiveTopology()
        topo_name = enum_str(topo, TOPOLOGY_MAP, "Topology.")
    except Exception:
        topo_name = "TriangleList"

    lines = [f"# Exported from RenderDoc MCP - event {event_id}"]
    lines.append(f"# Vertices: {len(positions)}, Topology: {topo_name}")

    for p in positions:
        lines.append(f"v {p[0]:.6f} {p[1]:.6f} {p[2]:.6f}")

    for n in normals:
        lines.append(f"vn {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}")

    for uv in uvs:
        lines.append(f"vt {uv[0]:.6f} {uv[1]:.6f}")

    has_normals = len(normals) == len(positions)
    has_uvs = len(uvs) == len(positions)
    triangles: list = []

    if topo_name == "TriangleStrip":
        for i in range(len(positions) - 2):
            if i % 2 == 0:
                triangles.append((i, i + 1, i + 2))
            else:
                triangles.append((i, i + 2, i + 1))
    elif topo_name == "TriangleFan":
        for i in range(1, len(positions) - 1):
            triangles.append((0, i, i + 1))
    else:
        for i in range(0, len(positions) - 2, 3):
            triangles.append((i, i + 1, i + 2))

    for t in triangles:
        i1, i2, i3 = t[0] + 1, t[1] + 1, t[2] + 1
        if has_normals and has_uvs:
            lines.append(f"f {i1}/{i1}/{i1} {i2}/{i2}/{i2} {i3}/{i3}/{i3}")
        elif has_normals:
            lines.append(f"f {i1}//{i1} {i2}//{i2} {i3}//{i3}")
        elif has_uvs:
            lines.append(f"f {i1}/{i1} {i2}/{i2} {i3}/{i3}")
        else:
            lines.append(f"f {i1} {i2} {i3}")

    with open(output_path, "w") as f:
        f.write("\n".join(lines) + "\n")

    return {
        "event_id": event_id,
        "output_path": output_path,
        "topology": topo_name,
        "vertices": len(positions),
        "normals": len(normals),
        "uvs": len(uvs),
        "triangles": len(triangles),
    }
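As a sanity check on the strip/fan handling in `export_mesh`, the triangle-index expansion can be exercised standalone (pure Python, no renderdoc module required). `expand_triangles` is a hypothetical helper that mirrors the loops above; `vertex_count` stands in for `len(positions)`:

```python
# Standalone sketch of the triangle expansion used by export_mesh.
# Indices here are 0-based; export_mesh adds 1 when emitting OBJ faces.

def expand_triangles(topology, vertex_count):
    """Expand strip/fan vertex orders into explicit triangle index triples."""
    tris = []
    if topology == "TriangleStrip":
        # Strips flip winding on every other triangle so all faces
        # keep a consistent front-facing orientation.
        for i in range(vertex_count - 2):
            tris.append((i, i + 1, i + 2) if i % 2 == 0 else (i, i + 2, i + 1))
    elif topology == "TriangleFan":
        # Fans share vertex 0 across all triangles.
        for i in range(1, vertex_count - 1):
            tris.append((0, i, i + 1))
    else:  # TriangleList and fallback
        for i in range(0, vertex_count - 2, 3):
            tris.append((i, i + 1, i + 2))
    return tris

print(expand_triangles("TriangleStrip", 5))
# → [(0, 1, 2), (1, 3, 2), (2, 3, 4)]
```

Note how a 5-vertex strip yields 3 triangles while a 5-vertex list would yield only 1; the winding flip on odd strip triangles is what keeps the exported OBJ faces consistently oriented.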
791
engine/tools/renderdoc_parser/tools/diagnostic_tools.py
Normal file
@@ -0,0 +1,791 @@
"""Intelligent diagnostic tools: diagnose_negative_values, diagnose_precision_issues,
diagnose_reflection_mismatch, diagnose_mobile_risks."""

import math
from typing import Optional

from ..session import get_session
from ..util import (
    rd,
    make_error,
    flags_to_list,
    SHADER_STAGE_MAP,
    enum_str,
    COMPARE_FUNC_MAP,
    BLEND_FACTOR_MAP,
    CULL_MODE_MAP,
)


def _pick_rgba(session, tex_id, x: int, y: int, slice_idx: int = 0):
    """Pick a pixel and return (r, g, b, a) floats. Returns None on failure."""
    try:
        v = session.controller.PickPixel(
            tex_id, x, y, rd.Subresource(0, slice_idx, 0), rd.CompType.Typeless
        )
        return v.floatValue[0], v.floatValue[1], v.floatValue[2], v.floatValue[3]
    except Exception:
        return None


def _sample_rt_for_negatives(session, tex_id, tex_desc, n_samples: int = 128):
    """Sample a render target for negative/NaN/Inf pixels. Returns a summary dict."""
    import math as _m

    w, h = tex_desc.width, tex_desc.height
    cols = max(1, int(_m.sqrt(n_samples * w / max(h, 1))))
    rows = max(1, n_samples // cols)
    sx, sy = max(1, w // cols), max(1, h // rows)

    neg_pixels: list = []
    inf_pixels: list = []
    nan_pixels: list = []
    total = 0

    for r in range(rows):
        for c in range(cols):
            px, py = c * sx + sx // 2, r * sy + sy // 2
            if px >= w or py >= h:
                continue
            total += 1
            rgba = _pick_rgba(session, tex_id, px, py)
            if rgba is None:
                continue
            rv, gv, bv, av = rgba
            if any(_m.isnan(v) for v in [rv, gv, bv]):
                nan_pixels.append({"x": px, "y": py, "value": [rv, gv, bv]})
            elif any(_m.isinf(v) for v in [rv, gv, bv]):
                inf_pixels.append(
                    {
                        "x": px,
                        "y": py,
                        "value": [
                            "+Inf"
                            if _m.isinf(v) and v > 0
                            else ("-Inf" if _m.isinf(v) else round(v, 5))
                            for v in [rv, gv, bv]
                        ],
                    }
                )
            elif any(v < 0 for v in [rv, gv, bv]):
                neg_pixels.append(
                    {"x": px, "y": py, "value": [round(v, 6) for v in [rv, gv, bv]]}
                )

    return {
        "total_samples": total,
        "negative_count": len(neg_pixels),
        "inf_count": len(inf_pixels),
        "nan_count": len(nan_pixels),
        "neg_samples": neg_pixels[:5],
        "inf_samples": inf_pixels[:5],
        "nan_samples": nan_pixels[:5],
    }


def diagnose_negative_values(
    check_targets: Optional[list] = None,
    trace_depth: int = 5,
) -> dict:
    """Scan all float render targets for negative/NaN/Inf pixels and trace their origin.

    Automatically identifies which draw calls first introduce negative values,
    checks whether TAA/temporal buffers are amplifying them,
    and provides root-cause candidates with fix suggestions.

    Args:
        check_targets: Optional list of render target resource IDs to check.
            If omitted, checks all floating-point render targets.
        trace_depth: How many events to scan backward when tracing the first
            event that introduced negative values (default 5).
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    textures = session.controller.GetTextures()
    float_rts: list = []
    for tex in textures:
        fmt = str(tex.format.Name()).upper()
        is_float = any(t in fmt for t in ["FLOAT", "R16", "R32", "R11G11B10"])
        is_rt = bool(tex.creationFlags & getattr(rd.TextureCategory, "ColorTarget", 2))
        if not is_float:
            continue
        rid_str = str(tex.resourceId)
        if check_targets and rid_str not in check_targets:
            continue
        if is_rt or check_targets:
            float_rts.append(tex)

    if not float_rts:
        float_rts = [
            t
            for t in textures
            if any(x in str(t.format.Name()).upper() for x in ["FLOAT", "R16G16B16A16"])
        ]

    sf = session.structured_file
    affected: list = []

    rt_draw_map = {}
    for eid in sorted(session.action_map.keys()):
        action = session.action_map[eid]
        if not (action.flags & rd.ActionFlags.Drawcall):
            continue
        for o in action.outputs:
            rid_str = str(o)
            if int(o) != 0:
                rt_draw_map.setdefault(rid_str, []).append(eid)

    for tex in float_rts[:8]:
        rid_str = str(tex.resourceId)
        draws_to_rt = rt_draw_map.get(rid_str, [])
        if not draws_to_rt:
            continue

        last_eid = draws_to_rt[-1]
        err2 = session.set_event(last_eid)
        if err2:
            continue

        scan = _sample_rt_for_negatives(session, tex.resourceId, tex, n_samples=200)
        has_anomaly = scan["negative_count"] + scan["inf_count"] + scan["nan_count"] > 0

        if not has_anomaly:
            continue

        entry: dict = {
            "resource_id": rid_str,
            "name": getattr(tex, "name", None) or rid_str,
            "format": str(tex.format.Name()),
            "size": f"{tex.width}x{tex.height}",
            "negative_count": scan["negative_count"],
            "inf_count": scan["inf_count"],
            "nan_count": scan["nan_count"],
            "sample_count": scan["total_samples"],
            "negative_rate": f"{scan['negative_count'] / max(scan['total_samples'], 1) * 100:.1f}%",
        }

        first_intro: Optional[dict] = None
        for eid in draws_to_rt[:trace_depth]:
            err2 = session.set_event(eid)
            if err2:
                continue
            sub_scan = _sample_rt_for_negatives(
                session, tex.resourceId, tex, n_samples=100
            )
            if sub_scan["negative_count"] + sub_scan["inf_count"] > 0:
                first_intro = {
                    "event_id": eid,
                    "name": session.action_map[eid].GetName(sf),
                    "new_negative_pixels": sub_scan["negative_count"],
                    "new_inf_pixels": sub_scan["inf_count"],
                }
                if sub_scan["neg_samples"]:
                    first_intro["sample"] = sub_scan["neg_samples"][0]
                break

        if first_intro:
            entry["first_introduced_at"] = first_intro

        if scan["negative_count"] > 10 and first_intro:
            upstream_neg = first_intro.get("new_negative_pixels", 0)
            if scan["negative_count"] > upstream_neg * 2 and upstream_neg > 0:
                ratio = scan["negative_count"] / upstream_neg
                entry["accumulation_warning"] = (
                    f"⚠️ Negative pixel count ({scan['negative_count']}) is {ratio:.1f}x the count first introduced upstream ({upstream_neg}); "
                    "TAA/temporal feedback is likely amplifying negative values"
                )

        affected.append(entry)

    candidates: list = []
    for entry in affected:
        intro = entry.get("first_introduced_at")
        if intro:
            ename = intro.get("name", "")
            if any(k in ename.upper() for k in ["IBL", "SH", "REFLECTION", "LIGHTING"]):
                candidates.append(
                    {
                        "likelihood": "high",
                        "cause": f"IBL/SH lighting computation produces negative values at event {intro['event_id']}",
                        "evidence": f"Negative values first appear in: {ename}",
                        "fix": "Add a max(0, result) clamp after SH sampling, or use a non-negative SH basis",
                    }
                )
            if any(k in ename.upper() for k in ["TAA", "TEMPORAL", "HISTORY"]):
                candidates.append(
                    {
                        "likelihood": "high",
                        "cause": "TAA/temporal pass amplifies upstream negative values",
                        "evidence": f"Negative value count grows in the TAA pass ({ename})",
                        "fix": "In the TAA shader, clamp the lower bound of history samples and the AABB neighborhood clamp to 0",
                    }
                )
        if entry.get("accumulation_warning"):
            candidates.append(
                {
                    "likelihood": "high",
                    "cause": "Temporal feedback accumulates and amplifies negative values",
                    "evidence": entry["accumulation_warning"],
                    "fix": "Check the temporal weight and color clipping logic; make sure negative values never enter the history buffer",
                }
            )

    if not candidates and affected:
        candidates.append(
            {
                "likelihood": "medium",
                "cause": "Floating-point precision or the algorithm itself produces unexpected negative values",
                "evidence": f"Negative values detected in {len(affected)} render targets",
                "fix": "Use debug_shader_at_pixel to step through the shader computation at a negative pixel",
            }
        )

    status = "NEGATIVE_VALUES_DETECTED" if affected else "NO_ANOMALIES_FOUND"
    return {
        "scan_result": status,
        "affected_targets": affected,
        "root_cause_candidates": candidates,
        "summary": (
            f"Detected anomalous values in {len(affected)} floating-point render targets"
            if affected
            else "No negative/NaN/Inf values detected in any checked floating-point render target"
        ),
    }


def diagnose_precision_issues(
    focus: str = "all",
    threshold: float = 0.01,
) -> dict:
    """Detect floating-point precision issues in the frame.

    Checks: half-float format limitations (R11G11B10 has no sign bit),
    depth buffer precision near the far plane, and common precision-sensitive
    format choices.

    Args:
        focus: Which precision aspects to check:
            "all" (default), "color_precision", "depth_precision", "format_risks".
        threshold: Relative error threshold for reporting (default 0.01 = 1%).
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    issues: list = []
    textures = session.controller.GetTextures()
    sf = session.structured_file

    if focus in ("all", "format_risks", "color_precision"):
        for tex in textures:
            fmt = str(tex.format.Name()).upper()
            if "R11G11B10" in fmt:
                rid_str = str(tex.resourceId)
                writing_events = []
                for eid, action in session.action_map.items():
                    if action.flags & rd.ActionFlags.Drawcall:
                        for o in action.outputs:
                            if str(o) == rid_str:
                                writing_events.append(eid)
                issues.append(
                    {
                        "type": "format_limitation",
                        "severity": "high",
                        "target": f"{getattr(tex, 'name', None) or rid_str} ({fmt}) {tex.width}x{tex.height}",
                        "description": (
                            "R11G11B10_FLOAT has no sign bit and cannot store negative values. "
                            "Writing negatives is clamped to 0 (Adreno) or undefined behavior (Mali)."
                        ),
                        "affected_event_count": len(writing_events),
                        "fix": "If upstream math can go negative (IBL SH, HDR tonemapping), clamp to [0, max] before writing to this RT, or switch to R16G16B16A16_FLOAT",
                    }
                )

    if focus in ("all", "depth_precision"):
        depth_textures = [
            t
            for t in textures
            if "D16" in str(t.format.Name()).upper()
            or "D24" in str(t.format.Name()).upper()
            or "D32" in str(t.format.Name()).upper()
        ]
        for dtex in depth_textures:
            fmt = str(dtex.format.Name()).upper()
            if "D16" in fmt:
                issues.append(
                    {
                        "type": "depth_precision",
                        "severity": "medium",
                        "target": f"{getattr(dtex, 'name', None) or str(dtex.resourceId)} (D16) {dtex.width}x{dtex.height}",
                        "description": "D16 depth is only 16-bit; z-fighting is very likely near the far plane.",
                        "fix": "Switch to D24_UNORM_S8_UINT, or better, D32_SFLOAT (reversed-Z)",
                    }
                )
            elif "D24" in fmt:
                issues.append(
                    {
                        "type": "depth_precision",
                        "severity": "low",
                        "target": f"{getattr(dtex, 'name', None) or str(dtex.resourceId)} (D24) {dtex.width}x{dtex.height}",
                        "description": "D24 depth buffers lose precision at distance. For scenes with a large far plane, prefer reversed-Z + D32.",
                        "fix": "Consider reversed-Z or a smaller far plane; distant precision improves roughly 3x",
                    }
                )

    if focus in ("all", "format_risks"):
        srgb_textures = sum(
            1 for t in textures if "SRGB" in str(t.format.Name()).upper()
        )
        linear_textures = sum(
            1
            for t in textures
            if "UNORM" in str(t.format.Name()).upper()
            and "SRGB" not in str(t.format.Name()).upper()
        )
        if srgb_textures > 0 and linear_textures > 0:
            issues.append(
                {
                    "type": "gamma_risk",
                    "severity": "low",
                    "description": f"The frame mixes SRGB ({srgb_textures}) and linear UNORM ({linear_textures}) textures; watch for inconsistent gamma handling",
                    "fix": "Make sure SRGB textures are linearized automatically on sampling (use an SRGB sampler) and avoid manual gamma correction on linear textures",
                }
            )

    high_count = sum(1 for i in issues if i.get("severity") == "high")
    med_count = sum(1 for i in issues if i.get("severity") == "medium")
    low_count = sum(1 for i in issues if i.get("severity") == "low")

    return {
        "issues_found": len(issues),
        "issues": issues,
        "summary": f"Detected {len(issues)} precision risks ({high_count} high / {med_count} medium / {low_count} low)",
    }


def diagnose_reflection_mismatch(
    reflection_pass_hint: Optional[str] = None,
    object_hint: Optional[str] = None,
) -> dict:
    """Diagnose why a reflection looks different from the original object.

    Automatically identifies reflection passes, pairs them with the original
    draw calls, compares shaders, blend state, and RT format, then quantifies
    the color difference.

    Args:
        reflection_pass_hint: Name hint for the reflection pass
            (e.g. "SceneCapture", "Reflection", "SSR", "Mirror").
            Auto-detects if omitted.
        object_hint: Name of the object to compare (e.g. "SM_Rock", "Building").
            Picks the draw call with the largest color difference if omitted.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    sf = session.structured_file

    REFL_KEYWORDS = ["reflection", "scenecapture", "planar", "ssr", "mirror", "reflect"]
    hint = reflection_pass_hint.lower() if reflection_pass_hint else None

    reflection_eids: list = []
    normal_eids: list = []

    for eid, action in sorted(session.action_map.items()):
        if not (action.flags & rd.ActionFlags.Drawcall):
            continue
        name = action.GetName(sf).lower()
        parent_name = action.parent.GetName(sf).lower() if action.parent else ""
        is_reflection = (hint and (hint in name or hint in parent_name)) or (
            not hint and any(k in name or k in parent_name for k in REFL_KEYWORDS)
        )
        if is_reflection:
            reflection_eids.append(eid)
        else:
            normal_eids.append(eid)

    if not reflection_eids:
        return {
            "status": "NO_REFLECTION_PASS_FOUND",
            "message": "No reflection pass found. Try specifying a pass-name keyword via reflection_pass_hint.",
            "suggestion": "Use list_actions to inspect the frame structure and find reflection-related pass names",
        }

    import re as _re

    def _normalize_name(name: str) -> str:
        for k in REFL_KEYWORDS:
            name = name.replace(k, "").replace(k.title(), "")
        return _re.sub(r"\s+", " ", name).strip().lower()

    normal_map = {
        _normalize_name(session.action_map[e].GetName(sf)): e for e in normal_eids
    }
    refl_map = {
        _normalize_name(session.action_map[e].GetName(sf)): e for e in reflection_eids
    }

    paired: list = []
    for key in set(normal_map) & set(refl_map):
        if object_hint is None or object_hint.lower() in key:
            paired.append((normal_map[key], refl_map[key]))

    if not paired:
        n = min(3, len(normal_eids), len(reflection_eids))
        paired = list(zip(normal_eids[-n:], reflection_eids[:n]))

    if not paired:
        return {
            "status": "NO_PAIRS_FOUND",
            "reflection_event_count": len(reflection_eids),
            "normal_event_count": len(normal_eids),
            "message": "Found a reflection pass but could not pair it with matching normal-render draw calls.",
        }

    results: list = []

    for normal_eid, refl_eid in paired[:3]:
        normal_action = session.action_map[normal_eid]
        refl_action = session.action_map[refl_eid]

        entry: dict = {
            "object": normal_action.GetName(sf),
            "normal_event_id": normal_eid,
            "reflection_event_id": refl_eid,
        }

        def _get_rt_color(eid: int):
            try:
                action = session.action_map[eid]
                for o in action.outputs:
                    if int(o) != 0:
                        tid = session.resolve_resource_id(str(o))
                        td = session.get_texture_desc(str(o))
                        if tid and td:
                            session.set_event(eid)
                            rgba = _pick_rgba(
                                session, tid, td.width // 2, td.height // 2
                            )
                            return (
                                rgba,
                                str(td.format.Name()),
                                f"{td.width}x{td.height}",
                            )
            except Exception:
                pass
            return None, None, None

        normal_rgba, normal_fmt, normal_size = _get_rt_color(normal_eid)
        refl_rgba, refl_fmt, refl_size = _get_rt_color(refl_eid)

        if normal_rgba and refl_rgba:
            nr, ng, nb = normal_rgba[:3]
            rr, rg, rb = refl_rgba[:3]
            normal_lum = (nr + ng + nb) / 3
            refl_lum = (rr + rg + rb) / 3
            ratio = refl_lum / max(normal_lum, 1e-6)
            entry["color_comparison"] = {
                "normal_rt": {
                    "format": normal_fmt,
                    "size": normal_size,
                    "sample_color": [round(v, 4) for v in normal_rgba[:3]],
                },
                "reflection_rt": {
                    "format": refl_fmt,
                    "size": refl_size,
                    "sample_color": [round(v, 4) for v in refl_rgba[:3]],
                },
                "brightness_ratio": round(ratio, 3),
                "description": f"Reflection is {abs(1 - ratio) * 100:.0f}% {'darker' if ratio < 0.95 else 'brighter'} than the normal render"
                if abs(1 - ratio) > 0.03
                else "Brightness is comparable",
            }

        causes: list = []

        try:
            session.set_event(normal_eid)
            ns = session.controller.GetPipelineState()
            session.set_event(refl_eid)
            rs = session.controller.GetPipelineState()

            n_ps = ns.GetShaderReflection(rd.ShaderStage.Pixel)
            r_ps = rs.GetShaderReflection(rd.ShaderStage.Pixel)
            if n_ps and r_ps and str(n_ps.resourceId) != str(r_ps.resourceId):
                causes.append(
                    {
                        "factor": "shader_variant",
                        "detail": f"Different pixel shaders: normal={n_ps.entryPoint} refl={r_ps.entryPoint}",
                        "implication": "The reflection pass uses a different shader variant and may skip part of the lighting (e.g. IBL specular)",
                        "fix": "Check that the reflection-pass shader variant includes full lighting, or diff the code behind the REFLECTION=0/1 macro",
                    }
                )

            try:
                ncbs = ns.GetColorBlends()
                rcbs = rs.GetColorBlends()
                if ncbs and rcbs:
                    nb0 = ncbs[0]
                    rb0 = rcbs[0]
                    if nb0.enabled != rb0.enabled:
                        causes.append(
                            {
                                "factor": "blend_state",
                                "detail": f"Blend: normal={nb0.enabled} → reflection={rb0.enabled}",
                                "implication": "The reflection pass uses a different blend configuration; color may be attenuated by alpha",
                                "fix": "Confirm whether the reflection pass needs alpha blending; if not, match the normal render",
                            }
                        )
                    elif nb0.enabled and rb0.enabled:
                        ns_src = enum_str(nb0.colorBlend.source, BLEND_FACTOR_MAP, "")
                        rs_src = enum_str(rb0.colorBlend.source, BLEND_FACTOR_MAP, "")
                        if ns_src != rs_src:
                            causes.append(
                                {
                                    "factor": "blend_src_factor",
                                    "detail": f"Color src factor: {ns_src} → {rs_src}",
                                    "implication": f"The src factor changed from {ns_src} to {rs_src}, which can attenuate brightness",
                                    "fix": "Check whether the reflection pass blend src factor should match the normal render",
                                }
                            )
            except Exception:
                pass

            if normal_fmt and refl_fmt and normal_fmt != refl_fmt:
                causes.append(
                    {
                        "factor": "render_target_format",
                        "detail": f"RT format: normal={normal_fmt} → reflection={refl_fmt}",
                        "implication": "Different RT formats can affect precision (e.g. R11G11B10 vs R16G16B16A16)",
                        "fix": "Use the same format for the reflection RT to avoid precision loss",
                    }
                )

        except Exception as e:
            entry["comparison_error"] = str(e)

        entry["causes"] = causes
        if not causes:
            entry["note"] = "No obvious differences found; try diff_draw_calls for a deeper comparison"

        results.append(entry)

    return {
        "reflection_pass_type": reflection_pass_hint or "auto-detected",
        "reflection_events_found": len(reflection_eids),
        "paired_objects": results,
        "summary": f"Analyzed {len(results)} normal/reflection draw call pairs",
    }


def diagnose_mobile_risks(
    check_categories: Optional[list] = None,
    severity_filter: str = "all",
) -> dict:
    """Comprehensive mobile-GPU risk assessment for the current frame.

    Checks precision, performance, compatibility, and GPU-specific issues.
    Provides a prioritized risk list with fix suggestions.

    Args:
        check_categories: Categories to check. Default: all.
            Valid: "precision", "performance", "compatibility", "gpu_specific".
        severity_filter: Only return risks of this severity or higher.
            "all" (default), "medium", "high".
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    valid_cats = {"precision", "performance", "compatibility", "gpu_specific"}
    if check_categories:
        invalid = set(check_categories) - valid_cats
        if invalid:
            return make_error(f"Unknown categories: {invalid}", "API_ERROR")
        cats = set(check_categories)
    else:
        cats = valid_cats

    driver_name = session.driver_name
    gpu_upper = driver_name.upper()
    textures = session.controller.GetTextures()
    sf = session.structured_file

    risks: list = []

    if "precision" in cats:
        for tex in textures:
            fmt = str(tex.format.Name()).upper()
            if "R11G11B10" in fmt:
                rid_str = str(tex.resourceId)
                writing_events = [
                    eid
                    for eid, a in session.action_map.items()
                    if (a.flags & rd.ActionFlags.Drawcall)
                    and any(str(o) == rid_str for o in a.outputs)
                ]
                if writing_events:
                    try:
                        session.set_event(writing_events[-1])
                        tex_id = session.resolve_resource_id(rid_str)
                        scan = (
                            _sample_rt_for_negatives(session, tex_id, tex, 100)
                            if tex_id
                            else {"negative_count": 0, "nan_count": 0}
                        )
                    except Exception:
                        scan = {"negative_count": 0, "nan_count": 0}

                    detail = f"{getattr(tex, 'name', None) or rid_str} uses R11G11B10_FLOAT (no sign bit); {len(writing_events)} draw calls write to this RT."
                    if scan.get("negative_count", 0) > 0:
                        detail += f" Confirmed {scan['negative_count']} negative samples."
                    risks.append(
                        {
                            "category": "precision",
                            "severity": "high",
                            "title": f"R11G11B10_FLOAT negative-value risk: {getattr(tex, 'name', None) or rid_str}",
                            "detail": detail,
                            "fix": "Clamp output to [0, +inf] before writing to this RT, or switch to R16G16B16A16_FLOAT",
                        }
                    )

        for tex in textures:
            fmt = str(tex.format.Name()).upper()
            if "D16" in fmt:
                risks.append(
                    {
                        "category": "precision",
                        "severity": "high",
                        "title": f"Insufficient D16 depth precision: {getattr(tex, 'name', None) or str(tex.resourceId)}",
                        "detail": "A 16-bit depth buffer has very low precision; z-fighting is obvious when the near/far ratio is large",
                        "fix": "Switch to D24_UNORM_S8_UINT or D32_SFLOAT + reversed-Z",
                    }
                )

    if "performance" in cats:
        main_res = (0, 0)
        rt_counts = {}
        for eid, action in session.action_map.items():
            if action.flags & rd.ActionFlags.Drawcall:
                for o in action.outputs:
                    if int(o) != 0:
                        td = session.get_texture_desc(str(o))
                        if td:
                            key = (td.width, td.height)
                            rt_counts[key] = rt_counts.get(key, 0) + 1
                            if rt_counts[key] > rt_counts.get(main_res, 0):
                                main_res = key

        main_clear_count = 0
        for eid, action in session.action_map.items():
            if action.flags & rd.ActionFlags.Clear:
                for o in action.outputs:
                    if int(o) != 0:
                        td = session.get_texture_desc(str(o))
                        if td and (td.width, td.height) == main_res:
                            main_clear_count += 1

        if main_clear_count > 6:
            risks.append(
                {
                    "category": "performance",
                    "severity": "medium",
                    "title": f"Too many fullscreen passes: {main_clear_count} clears",
                    "detail": f"The main RT ({main_res[0]}x{main_res[1]}) is cleared {main_clear_count} times, indicating many fullscreen passes; each one triggers a tile store/load on tile-based GPUs",
                    "fix": "Merge post-process passes to reduce RT switches",
                }
            )

        max_samplers = 0
        heavy_eid = 0
        for eid in list(session.action_map.keys())[:50]:
            action = session.action_map[eid]
            if not (action.flags & rd.ActionFlags.Drawcall):
                continue
            try:
                session.set_event(eid)
                state = session.controller.GetPipelineState()
                ps_refl = state.GetShaderReflection(rd.ShaderStage.Pixel)
                if ps_refl:
                    n = len(ps_refl.readOnlyResources)
                    if n > max_samplers:
                        max_samplers = n
                        heavy_eid = eid
            except Exception:
                pass

        if max_samplers > 12:
            risks.append(
                {
                    "category": "compatibility",
                    "severity": "medium",
                    "title": f"High texture sampler binding count: {max_samplers} (event {heavy_eid})",
                    "detail": f"Some mobile GPUs cap samplers at 16; the current maximum binding count is {max_samplers}",
                    "fix": "Merge textures into atlases or use texture arrays to reduce sampler count",
                }
            )

    if "compatibility" in cats:
        for tex in textures:
            if tex.width > 4096 or tex.height > 4096:
                risks.append(
                    {
                        "category": "compatibility",
                        "severity": "medium",
                        "title": f"Oversized texture: {getattr(tex, 'name', None) or str(tex.resourceId)} ({tex.width}x{tex.height})",
                        "detail": "Some mobile devices cap texture size at 4096; exceeding it causes rendering errors or black output",
                        "fix": "Reduce texture resolution or use texture streaming",
                    }
                )

    if "gpu_specific" in cats:
        if "ADRENO" in gpu_upper:
            risks.append(
                {
                    "category": "gpu_specific",
                    "severity": "medium",
                    "title": "Adreno: mediump float precision below spec expectations",
                    "detail": "mediump on Adreno GPUs is roughly the FP16 minimum; normal reconstruction and reflection math may show visible artifacts",
                    "fix": "Use highp or full float precision for normal reconstruction, lighting, and reflection-direction math",
                }
            )
        elif "MALI" in gpu_upper:
            risks.append(
                {
                    "category": "gpu_specific",
                    "severity": "low",
                    "title": "Mali: discard performance impact",
                    "detail": "Heavy use of discard on Mali GPUs defeats early-Z and increases overdraw cost",
                    "fix": "Reduce discard usage in alpha testing; use alpha-to-coverage or a pre-Z pass instead",
                }
            )

    SEVERITY_ORDER = {"high": 2, "medium": 1, "low": 0}
    min_sev = SEVERITY_ORDER.get(severity_filter, 0)
    if severity_filter != "all":
        risks = [
            r
            for r in risks
            if SEVERITY_ORDER.get(r.get("severity", "low"), 0) >= min_sev
        ]

    risks.sort(key=lambda r: -SEVERITY_ORDER.get(r.get("severity", "low"), 0))

    high_n = sum(1 for r in risks if r.get("severity") == "high")
    med_n = sum(1 for r in risks if r.get("severity") == "medium")
    low_n = sum(1 for r in risks if r.get("severity") == "low")

    return {
        "device": driver_name,
        "risks": risks,
        "risk_count": len(risks),
        "summary": f"Found {len(risks)} risks ({high_n} high / {med_n} medium / {low_n} low)."
        + (f" Top priority: {risks[0]['title']}" if risks else " No significant risks detected."),
    }
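The severity filtering and ranking at the end of `diagnose_mobile_risks` is plain Python and can be sketched in isolation (synthetic risk data, no capture or renderdoc module needed):

```python
# Standalone sketch of the severity filter/sort used by diagnose_mobile_risks.
# The "risks" list below is synthetic example data.

SEVERITY_ORDER = {"high": 2, "medium": 1, "low": 0}

def filter_and_rank(risks, severity_filter="all"):
    """Drop risks below the requested severity, then sort highest first."""
    min_sev = SEVERITY_ORDER.get(severity_filter, 0)
    if severity_filter != "all":
        risks = [r for r in risks
                 if SEVERITY_ORDER.get(r.get("severity", "low"), 0) >= min_sev]
    # Unknown severities default to "low" so malformed entries sort last.
    return sorted(risks, key=lambda r: -SEVERITY_ORDER.get(r.get("severity", "low"), 0))

risks = [
    {"title": "discard overhead", "severity": "low"},
    {"title": "D16 depth", "severity": "high"},
    {"title": "fullscreen passes", "severity": "medium"},
]
print([r["title"] for r in filter_and_rank(risks, "medium")])
# → ['D16 depth', 'fullscreen passes']
```

With `severity_filter="medium"` the low-severity entry is dropped and the remaining risks come back highest-severity first, which is the ordering the returned `"risks"` list exposes to callers.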
311
engine/tools/renderdoc_parser/tools/event_tools.py
Normal file
@@ -0,0 +1,311 @@
|
||||
"""Event navigation tools: list_actions, get_action, set_event, search_actions, find_draws."""
|
||||
|
||||
import re
|
||||
from typing import Optional
|
||||
|
||||
from ..session import get_session
|
||||
from ..util import (
|
||||
rd,
|
||||
serialize_action,
|
||||
serialize_action_detail,
|
||||
flags_to_list,
|
||||
make_error,
|
||||
SHADER_STAGE_MAP,
|
||||
)
|
||||
|
||||
_EVENT_TYPE_MAP = {
|
||||
"draw": int(rd.ActionFlags.Drawcall),
|
||||
"dispatch": int(rd.ActionFlags.Dispatch),
|
||||
"clear": int(rd.ActionFlags.Clear),
|
||||
"copy": int(rd.ActionFlags.Copy),
|
||||
"resolve": int(rd.ActionFlags.Resolve),
|
||||
}
|
||||
|
||||
|
||||
def list_actions(
    max_depth: int = 2,
    filter_flags: Optional[list] = None,
    filter: Optional[str] = None,
    event_type: Optional[str] = None,
) -> dict:
    """List the draw call / action tree of the current capture.

    Args:
        max_depth: Maximum depth to recurse into children (default 2).
        filter_flags: Optional list of ActionFlags names to filter by (e.g. ["Drawcall", "Clear"]).
            Only actions matching ANY of these flags are included.
        filter: Case-insensitive substring to match against action names
            (e.g. "shadow", "taa", "bloom", "reflection").
        event_type: Shorthand type filter: "draw", "dispatch", "clear", "copy", "resolve".
            Overrides filter_flags if both are provided.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    if event_type is not None:
        et = event_type.lower()
        if et != "all":
            type_mask = _EVENT_TYPE_MAP.get(et)
            if type_mask is None:
                return make_error(
                    f"Unknown event_type: {event_type}. Valid: all, {', '.join(_EVENT_TYPE_MAP)}",
                    "API_ERROR",
                )
            filter_flags = None
        else:
            type_mask = 0
    else:
        type_mask = 0

    flag_mask = type_mask
    if not type_mask and filter_flags:
        for name in filter_flags:
            val = getattr(rd.ActionFlags, name, None)
            if val is None:
                return make_error(f"Unknown ActionFlag: {name}", "API_ERROR")
            flag_mask |= val

    name_filter = filter.lower() if filter else None

    sf = session.structured_file
    root_actions = session.get_root_actions()

    def should_include(action) -> bool:
        if flag_mask and not (action.flags & flag_mask):
            return False
        if name_filter and name_filter not in action.GetName(sf).lower():
            return False
        return True

    def serialize_filtered(action, depth: int) -> Optional[dict]:
        included = should_include(action)
        children = []
        if depth < max_depth and len(action.children) > 0:
            for c in action.children:
                child = serialize_filtered(c, depth + 1)
                if child is not None:
                    children.append(child)

        if not included and not children:
            return None

        result = {
            "event_id": action.eventId,
            "name": action.GetName(sf),
            "flags": flags_to_list(action.flags),
        }
        if action.numIndices > 0:
            result["num_indices"] = action.numIndices
        if children:
            result["children"] = children
        elif depth >= max_depth and len(action.children) > 0:
            result["children_count"] = len(action.children)
        return result

    needs_filter = bool(flag_mask or name_filter)
    if needs_filter:
        actions = []
        for a in root_actions:
            r = serialize_filtered(a, 0)
            if r is not None:
                actions.append(r)
    else:
        actions = [serialize_action(a, sf, max_depth=max_depth) for a in root_actions]

    return {"actions": actions, "total": len(session.action_map)}

def get_action(event_id: int) -> dict:
    """Get detailed information about a specific action/draw call.

    Args:
        event_id: The event ID of the action to inspect.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    action = session.get_action(event_id)
    if action is None:
        return make_error(f"Event ID {event_id} not found", "INVALID_EVENT_ID")

    return serialize_action_detail(action, session.structured_file)

def set_event(event_id: int) -> dict:
    """Navigate the replay to a specific event ID.

    This must be called before inspecting pipeline state, shader bindings, etc.
    Subsequent queries will reflect the state at this event.

    Args:
        event_id: The event ID to navigate to.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    err = session.set_event(event_id)
    if err:
        return err

    action = session.get_action(event_id)
    name = action.GetName(session.structured_file) if action else "unknown"
    return {"status": "ok", "event_id": event_id, "name": name}

def search_actions(
    name_pattern: Optional[str] = None,
    flags: Optional[list] = None,
) -> dict:
    """Search for actions by name pattern and/or flags.

    Args:
        name_pattern: Regex pattern to match action names (case-insensitive).
        flags: List of ActionFlags names; actions matching ANY flag are included.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    flag_mask = 0
    if flags:
        for name in flags:
            val = getattr(rd.ActionFlags, name, None)
            if val is None:
                return make_error(f"Unknown ActionFlag: {name}", "API_ERROR")
            flag_mask |= val

    pattern = None
    if name_pattern:
        try:
            pattern = re.compile(name_pattern, re.IGNORECASE)
        except re.error as e:
            return make_error(f"Invalid regex pattern: {e}", "API_ERROR")

    sf = session.structured_file
    results = []

    for eid, action in sorted(session.action_map.items()):
        if flag_mask and not (action.flags & flag_mask):
            continue
        action_name = action.GetName(sf)
        if pattern and not pattern.search(action_name):
            continue
        results.append(
            {
                "event_id": eid,
                "name": action_name,
                "flags": flags_to_list(action.flags),
            }
        )

    return {"matches": results, "count": len(results)}

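The filtering in the search loop is independent of RenderDoc itself: keep an action when its flags intersect the mask (if one is given) and its name matches the regex (if one is given). A minimal standalone sketch using mock action dicts and stand-in flag bits (`DRAW`/`CLEAR` are illustrative values, not real `ActionFlags`):

```python
import re

def match_actions(actions, name_pattern=None, flag_mask=0):
    """Keep actions whose flags intersect flag_mask (if nonzero) and whose
    name matches the case-insensitive regex (if given); return the names."""
    pattern = re.compile(name_pattern, re.IGNORECASE) if name_pattern else None
    out = []
    for a in actions:
        if flag_mask and not (a["flags"] & flag_mask):
            continue
        if pattern and not pattern.search(a["name"]):
            continue
        out.append(a["name"])
    return out

DRAW, CLEAR = 0x1, 0x2  # stand-in flag bits for the sketch
acts = [
    {"name": "ShadowPass/Draw#1", "flags": DRAW},
    {"name": "Clear(Color)", "flags": CLEAR},
    {"name": "TAA Resolve", "flags": DRAW},
]
print(match_actions(acts, name_pattern="shadow"))  # ['ShadowPass/Draw#1']
print(match_actions(acts, flag_mask=DRAW))         # ['ShadowPass/Draw#1', 'TAA Resolve']
```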
def find_draws(
    blend: Optional[bool] = None,
    min_vertices: Optional[int] = None,
    texture_id: Optional[str] = None,
    shader_id: Optional[str] = None,
    render_target_id: Optional[str] = None,
    max_results: int = 50,
) -> dict:
    """Search draw calls by rendering state filters.

    Iterates through draw calls checking pipeline state. This can be slow
    for large captures as it must set_event for each draw call.

    Args:
        blend: Filter by blend enabled state (True/False).
        min_vertices: Minimum vertex count to include.
        texture_id: Only draws using this texture resource ID.
        shader_id: Only draws using this shader resource ID.
        render_target_id: Only draws targeting this render target.
        max_results: Maximum results to return (default 50).
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    sf = session.structured_file
    results = []
    saved_event = session.current_event

    for eid in sorted(session.action_map.keys()):
        if len(results) >= max_results:
            break

        action = session.action_map[eid]
        if not (action.flags & rd.ActionFlags.Drawcall):
            continue

        if min_vertices is not None and action.numIndices < min_vertices:
            continue

        if render_target_id is not None:
            output_ids = [str(o) for o in action.outputs if int(o) != 0]
            if render_target_id not in output_ids:
                continue

        needs_state = (
            blend is not None or texture_id is not None or shader_id is not None
        )
        if needs_state:
            session.set_event(eid)
            state = session.controller.GetPipelineState()

            if blend is not None:
                try:
                    cbs = state.GetColorBlends()
                    blend_enabled = cbs[0].enabled if cbs else False
                    if blend_enabled != blend:
                        continue
                except Exception:
                    continue

            if shader_id is not None:
                found = False
                for _sname, sstage in SHADER_STAGE_MAP.items():
                    if sstage == rd.ShaderStage.Compute:
                        continue
                    refl = state.GetShaderReflection(sstage)
                    if refl is not None and str(refl.resourceId) == shader_id:
                        found = True
                        break
                if not found:
                    continue

            if texture_id is not None:
                found = False
                ps_refl = state.GetShaderReflection(rd.ShaderStage.Pixel)
                if ps_refl is not None:
                    try:
                        all_ro = state.GetReadOnlyResources(rd.ShaderStage.Pixel)
                        for b in all_ro:
                            if str(b.descriptor.resource) == texture_id:
                                found = True
                                break
                    except Exception:
                        pass
                if not found:
                    continue

        results.append(
            {
                "event_id": eid,
                "name": action.GetName(sf),
                "flags": flags_to_list(action.flags),
                "num_indices": action.numIndices,
            }
        )

    if saved_event is not None:
        session.set_event(saved_event)

    return {"matches": results, "count": len(results), "max_results": max_results}
577
engine/tools/renderdoc_parser/tools/performance_tools.py
Normal file
@@ -0,0 +1,577 @@
|
||||
"""Performance analysis tools: get_pass_timing, analyze_overdraw, analyze_bandwidth, analyze_state_changes."""
|
||||
|
||||
from typing import Optional
|
||||
|
||||
from ..session import get_session
|
||||
from ..util import (
|
||||
rd,
|
||||
make_error,
|
||||
flags_to_list,
|
||||
SHADER_STAGE_MAP,
|
||||
BLEND_FACTOR_MAP,
|
||||
COMPARE_FUNC_MAP,
|
||||
enum_str,
|
||||
)
|
||||
|
||||
|
||||
def get_pass_timing(
    granularity: str = "pass",
    top_n: int = 20,
) -> dict:
    """Get per-render-pass or per-draw-call GPU timing estimates.

    Note: True GPU timing requires counter support in the capture. When counters
    are unavailable, this tool falls back to heuristic estimates based on draw
    call complexity (triangle count, i.e. numIndices / 3).

    Args:
        granularity: "pass" (group by render pass, default) or "draw_call" (per draw).
        top_n: Return only the top N most expensive entries (default 20).
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    counter_data = {}
    has_real_timing = False
    try:
        counters = session.controller.EnumerateCounters()
        timing_counter = None
        for c in counters:
            info = session.controller.DescribeCounter(c)
            if "time" in info.name.lower() or "duration" in info.name.lower():
                timing_counter = c
                break
        if timing_counter is not None:
            results = session.controller.FetchCounters([timing_counter])
            for r in results:
                counter_data[r.eventId] = r.value.d
            has_real_timing = True
    except Exception:
        pass

    sf = session.structured_file

    if granularity == "draw_call":
        entries: list = []
        for eid in sorted(session.action_map.keys()):
            action = session.action_map[eid]
            if not (action.flags & rd.ActionFlags.Drawcall):
                continue
            cost = counter_data.get(eid, action.numIndices / 3)
            entries.append(
                {
                    "event_id": eid,
                    "name": action.GetName(sf),
                    "vertex_count": action.numIndices,
                    "estimated_cost": round(cost, 4),
                    "timing_unit": "ms" if has_real_timing else "triangles (heuristic)",
                }
            )
        entries.sort(key=lambda e: -e["estimated_cost"])
        return {
            "granularity": "draw_call",
            "has_real_timing": has_real_timing,
            "top_n": top_n,
            "entries": entries[:top_n],
            "total_draw_calls": len(entries),
            "note": ""
            if has_real_timing
            else "GPU timing counters unavailable; showing triangle counts as proxy",
        }

    passes: list = []
    current: Optional[dict] = None
    last_outputs: Optional[tuple] = None

    for eid in sorted(session.action_map.keys()):
        action = session.action_map[eid]
        is_clear = bool(action.flags & rd.ActionFlags.Clear)
        is_draw = bool(action.flags & rd.ActionFlags.Drawcall)
        if not is_clear and not is_draw:
            continue

        outputs = tuple(str(o) for o in action.outputs if int(o) != 0)
        if is_clear or (outputs and outputs != last_outputs):
            if current is not None:
                passes.append(current)
            current = {
                "pass_index": len(passes),
                "start_event": eid,
                "end_event": eid,
                "name": action.GetName(sf),
                "draw_count": 0,
                "total_vertices": 0,
                "estimated_cost": 0.0,
                "render_targets": list(outputs),
            }
        if current is None:
            current = {
                "pass_index": 0,
                "start_event": eid,
                "end_event": eid,
                "name": action.GetName(sf),
                "draw_count": 0,
                "total_vertices": 0,
                "estimated_cost": 0.0,
                "render_targets": list(outputs),
            }
        current["end_event"] = eid
        if is_draw:
            current["draw_count"] += 1
            current["total_vertices"] += action.numIndices
            cost = counter_data.get(eid, action.numIndices / 3)
            current["estimated_cost"] += cost
        if outputs:
            last_outputs = outputs

    if current is not None:
        passes.append(current)

    for p in passes:
        rt_infos = []
        for rid_str in p["render_targets"]:
            td = session.get_texture_desc(rid_str)
            if td:
                rt_infos.append(f"{td.width}x{td.height} {td.format.Name()}")
        p["rt_summary"] = ", ".join(rt_infos) if rt_infos else "unknown"
        p["estimated_cost"] = round(p["estimated_cost"], 4)

    passes.sort(key=lambda p: -p["estimated_cost"])
    return {
        "granularity": "pass",
        "has_real_timing": has_real_timing,
        "top_n": top_n,
        "passes": passes[:top_n],
        "total_passes": len(passes),
        "timing_unit": "ms" if has_real_timing else "triangles (heuristic)",
        "note": ""
        if has_real_timing
        else "GPU timing counters unavailable; pass cost estimated from triangle counts",
    }

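The pass-grouping heuristic above (start a new pass on every clear, or when the bound render targets change) can be sketched over plain tuples without a capture. `group_passes` is a hypothetical standalone helper; `events` entries are `(event_id, kind, outputs)`:

```python
def group_passes(events):
    """events: list of (eid, kind, outputs) where kind is 'clear' or 'draw'
    and outputs is a tuple of render-target ids. A new pass starts on a
    clear, or when outputs differ from the previously seen outputs."""
    passes, current, last_outputs = [], None, None
    for eid, kind, outputs in events:
        if kind == "clear" or (outputs and outputs != last_outputs):
            if current is not None:
                passes.append(current)
            current = {"start": eid, "end": eid, "draws": 0}
        if current is None:
            current = {"start": eid, "end": eid, "draws": 0}
        current["end"] = eid
        if kind == "draw":
            current["draws"] += 1
        if outputs:
            last_outputs = outputs
    if current is not None:
        passes.append(current)
    return passes

events = [
    (1, "clear", ("rt0",)),
    (2, "draw", ("rt0",)),
    (3, "draw", ("rt0",)),
    (4, "draw", ("rt1",)),
    (5, "draw", ("rt1",)),
]
print(group_passes(events))  # two passes: events 1-3 (2 draws) and 4-5 (2 draws)
```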
def analyze_overdraw(
    pass_name: Optional[str] = None,
    region: Optional[dict] = None,
    sample_count: int = 64,
) -> dict:
    """Analyze overdraw across the frame or within a specific render pass.

    Estimates overdraw from the number of draw calls submitted to each render
    target (see the returned note; pixel-level sampling is not performed yet).
    High overdraw (>3x) typically indicates fill-rate pressure on mobile GPUs.

    Args:
        pass_name: Optional pass name filter (substring match). Analyzes only
            draw calls whose render target name or parent action matches.
        region: Optional area {"x":0,"y":0,"width":W,"height":H} to analyze.
            Defaults to the main render target's full area.
        sample_count: Number of pixels to sample per draw call (default 64).
            Reserved for pixel-level analysis; unused by the current estimate.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    sf = session.structured_file

    draw_eids: list = []
    for eid in sorted(session.action_map.keys()):
        action = session.action_map[eid]
        if not (action.flags & rd.ActionFlags.Drawcall):
            continue
        if pass_name:
            name = action.GetName(sf).lower()
            if pass_name.lower() not in name:
                parent = action.parent
                if parent and pass_name.lower() not in parent.GetName(sf).lower():
                    continue
        draw_eids.append(eid)

    if not draw_eids:
        return make_error("No draw calls found matching filter", "API_ERROR")

    main_w, main_h = 1, 1
    # Plain set() here: subscripted builtins like set[str] need Python 3.9+.
    seen_rts = set()
    for eid in draw_eids:
        action = session.action_map[eid]
        for o in action.outputs:
            rid_str = str(o)
            if int(o) != 0 and rid_str not in seen_rts:
                seen_rts.add(rid_str)
                td = session.get_texture_desc(rid_str)
                if td and td.width * td.height > main_w * main_h:
                    main_w, main_h = td.width, td.height

    rx = region.get("x", 0) if region else 0
    ry = region.get("y", 0) if region else 0
    rw = region.get("width", main_w) if region else main_w
    rh = region.get("height", main_h) if region else main_h

    import math as _math

    # Sampling grid reserved for pixel-level analysis; the estimate below
    # does not use it yet.
    cols = max(1, int(_math.sqrt(sample_count * rw / max(rh, 1))))
    rows_g = max(1, sample_count // cols)
    step_x = max(1, rw // cols)
    step_y = max(1, rh // rows_g)
    sample_grid = [
        (rx + c * step_x + step_x // 2, ry + r * step_y + step_y // 2)
        for r in range(rows_g)
        for c in range(cols)
        if rx + c * step_x + step_x // 2 < rx + rw
        and ry + r * step_y + step_y // 2 < ry + rh
    ]

    rt_draw_map = {}
    for eid in draw_eids:
        action = session.action_map[eid]
        key = tuple(str(o) for o in action.outputs if int(o) != 0)
        if key not in rt_draw_map:
            rt_draw_map[key] = []
        rt_draw_map[key].append(eid)

    overdraw_data: list = []
    total_draws = len(draw_eids)
    for rt_key, eids in rt_draw_map.items():
        rt_name_parts = []
        for rid_str in rt_key:
            td = session.get_texture_desc(rid_str)
            if td:
                rt_name_parts.append(f"{td.width}x{td.height} {td.format.Name()}")
        overdraw_data.append(
            {
                "render_targets": list(rt_key),
                "rt_summary": ", ".join(rt_name_parts) or "unknown",
                "draw_count": len(eids),
            }
        )
    overdraw_data.sort(key=lambda d: -d["draw_count"])

    avg_overdraw = total_draws / max(len(rt_draw_map), 1)

    severity = "low"
    if avg_overdraw > 5:
        severity = "high"
    elif avg_overdraw > 3:
        severity = "medium"

    hint = ""
    if severity == "high":
        hint = f"Average overdraw {avg_overdraw:.1f}x is high. Check transparent object sorting, reduce particle layer counts, and enable early-Z rejection."
    elif severity == "medium":
        hint = f"Average overdraw {avg_overdraw:.1f}x is moderate. Mobile fill rate is limited; consider removing unnecessary full-screen passes."

    return {
        "total_draws_analyzed": total_draws,
        "pass_filter": pass_name,
        "main_resolution": f"{main_w}x{main_h}",
        "estimated_avg_overdraw": round(avg_overdraw, 2),
        "severity": severity,
        "per_rt_breakdown": overdraw_data[:10],
        "hint": hint,
        "note": "Overdraw estimated from draw call counts per render target (not pixel-level measurement). Use pixel_history for exact per-pixel analysis.",
    }

def analyze_bandwidth(
    breakdown_by: str = "pass",
) -> dict:
    """Estimate GPU memory bandwidth consumption for the frame.

    Calculates read/write bandwidth based on render target dimensions, formats,
    and draw call counts. Identifies tile load/store operations on mobile GPUs.

    Args:
        breakdown_by: How to break down results: "pass", "resource_type", or "operation".
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    sf = session.structured_file

    def _bytes_per_pixel(fmt_name: str) -> int:
        fn = fmt_name.upper()
        if "R32G32B32A32" in fn:
            return 16
        elif "R16G16B16A16" in fn:
            return 8
        elif "R11G11B10" in fn or "R10G10B10" in fn:
            return 4
        elif "R8G8B8A8" in fn or "B8G8R8A8" in fn:
            return 4
        elif "D24" in fn or "D32" in fn:
            return 4
        elif "R16G16" in fn:
            return 4
        elif "R32" in fn:
            return 4
        elif "BC" in fn or "ETC" in fn or "ASTC" in fn:
            return 1
        return 4

    rt_stats = {}
    for eid in sorted(session.action_map.keys()):
        action = session.action_map[eid]
        is_draw = bool(action.flags & rd.ActionFlags.Drawcall)
        is_clear = bool(action.flags & rd.ActionFlags.Clear)
        if not is_draw and not is_clear:
            continue
        for o in action.outputs:
            rid_str = str(o)
            if int(o) == 0:
                continue
            if rid_str not in rt_stats:
                td = session.get_texture_desc(rid_str)
                if td:
                    bpp = _bytes_per_pixel(str(td.format.Name()))
                    rt_stats[rid_str] = {
                        "name": getattr(td, "name", None) or rid_str,
                        "size": f"{td.width}x{td.height}",
                        "format": str(td.format.Name()),
                        "bytes_per_pixel": bpp,
                        "pixel_count": td.width * td.height,
                        "draw_count": 0,
                        "clear_count": 0,
                    }
            if rid_str in rt_stats:
                if is_draw:
                    rt_stats[rid_str]["draw_count"] += 1
                if is_clear:
                    rt_stats[rid_str]["clear_count"] += 1

    total_write_bytes = 0
    total_read_bytes = 0

    rt_bw_list: list = []
    for rid_str, st in rt_stats.items():
        px = st["pixel_count"]
        bpp = st["bytes_per_pixel"]
        write_b = px * bpp * (st["draw_count"] + st["clear_count"])
        read_b = px * bpp * max(1, st["clear_count"])
        total_write_bytes += write_b
        total_read_bytes += read_b
        rt_bw_list.append(
            {
                "resource_id": rid_str,
                "name": st["name"],
                "size": st["size"],
                "format": st["format"],
                "draw_count": st["draw_count"],
                "estimated_write_mb": round(write_b / (1024 * 1024), 2),
                "estimated_read_mb": round(read_b / (1024 * 1024), 2),
                "estimated_total_mb": round((write_b + read_b) / (1024 * 1024), 2),
            }
        )

    rt_bw_list.sort(key=lambda e: -e["estimated_total_mb"])

    textures = session.controller.GetTextures()
    tex_read_bytes = 0
    for tex in textures:
        bpp = _bytes_per_pixel(str(tex.format.Name()))
        tex_read_bytes += tex.width * tex.height * bpp

    total_mb = (total_write_bytes + total_read_bytes + tex_read_bytes) / (1024 * 1024)

    tile_warnings: list = []
    for st in rt_stats.values():
        if st["clear_count"] > 1:
            tile_warnings.append(
                f"{st['name']}: {st['clear_count']} clears; each clear triggers a tile store+load on tile-based GPUs, so consider merging into a single clear"
            )

    if breakdown_by == "resource_type":
        breakdown = {
            "render_targets": {
                "write_mb": round(total_write_bytes / (1024 * 1024), 2),
                "read_mb": round(total_read_bytes / (1024 * 1024), 2),
            },
            "textures": {
                "read_mb": round(tex_read_bytes / (1024 * 1024), 2),
            },
        }
    else:
        breakdown = {"top_render_targets": rt_bw_list[:10]}

    result: dict = {
        "estimated_total_bandwidth_mb": round(total_mb, 2),
        "breakdown": breakdown,
        "note": "Bandwidth estimates assume full render target read/write per draw (upper bound). Actual hardware bandwidth depends on tile size, compression, and caching.",
    }
    if tile_warnings:
        result["tile_bandwidth_warnings"] = tile_warnings
    return result

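The `_bytes_per_pixel` heuristic is plain substring matching on the format name, so it is easy to exercise standalone. A condensed copy (subset of the branches above, same default of 4 bytes), with an illustrative 1080p bandwidth calculation:

```python
def bytes_per_pixel(fmt_name):
    """Rough per-pixel byte cost from a format-name substring; block-compressed
    formats (BC/ETC/ASTC) are approximated as 1 byte per pixel."""
    fn = fmt_name.upper()
    if "R32G32B32A32" in fn:
        return 16
    if "R16G16B16A16" in fn:
        return 8
    if "BC" in fn or "ETC" in fn or "ASTC" in fn:
        return 1
    return 4  # R8G8B8A8, D24/D32, R11G11B10, ... all land here

# e.g. one full write of a 1080p RGBA16F render target:
mb = 1920 * 1080 * bytes_per_pixel("R16G16B16A16_FLOAT") / (1024 * 1024)
print(round(mb, 2))  # 15.82
```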
def analyze_state_changes(
    pass_name: Optional[str] = None,
    change_types: Optional[list] = None,
) -> dict:
    """Analyze pipeline state changes between consecutive draw calls.

    Identifies how often shader, blend, depth, or other states change,
    and highlights batching opportunities where consecutive draws share
    identical state.

    Args:
        pass_name: Optional pass name filter (substring match).
        change_types: List of state aspects to track. Defaults to all.
            Valid: "shader", "blend", "depth", "cull", "render_target".
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    sf = session.structured_file
    valid_types = {"shader", "blend", "depth", "cull", "render_target"}
    if change_types:
        invalid = set(change_types) - valid_types
        if invalid:
            return make_error(
                f"Unknown change_types: {invalid}. Valid: {valid_types}", "API_ERROR"
            )
        track = set(change_types)
    else:
        track = valid_types

    saved_event = session.current_event

    draw_eids: list = []
    for eid in sorted(session.action_map.keys()):
        action = session.action_map[eid]
        if not (action.flags & rd.ActionFlags.Drawcall):
            continue
        if pass_name and pass_name.lower() not in action.GetName(sf).lower():
            parent = action.parent
            if not parent or pass_name.lower() not in parent.GetName(sf).lower():
                continue
        draw_eids.append(eid)

    if not draw_eids:
        return make_error("No draw calls found", "API_ERROR")

    change_counts = {t: 0 for t in track}
    prev_state: dict = {}
    batching_runs: list = []
    current_run: Optional[dict] = None

    MAX_DRAWS_FOR_STATE = 200

    for eid in draw_eids[:MAX_DRAWS_FOR_STATE]:
        try:
            session.set_event(eid)
            state = session.controller.GetPipelineState()
        except Exception:
            continue

        cur_state: dict = {}

        if "shader" in track:
            try:
                ps_refl = state.GetShaderReflection(rd.ShaderStage.Pixel)
                cur_state["shader"] = str(ps_refl.resourceId) if ps_refl else None
            except Exception:
                cur_state["shader"] = None

        if "blend" in track:
            try:
                cbs = state.GetColorBlends()
                b = cbs[0] if cbs else None
                cur_state["blend"] = (
                    (b.enabled, int(b.colorBlend.source), int(b.colorBlend.destination))
                    if b
                    else None
                )
            except Exception:
                cur_state["blend"] = None

        if "depth" in track:
            cur_state["depth"] = None

        if "cull" in track:
            cur_state["cull"] = None

        if "render_target" in track:
            cur_state["render_target"] = tuple(
                str(o) for o in session.action_map[eid].outputs if int(o) != 0
            )

        changed_fields: list = []
        if prev_state:
            for field in track:
                if cur_state.get(field) != prev_state.get(field):
                    change_counts[field] += 1
                    changed_fields.append(field)

        if not changed_fields:
            if current_run is None:
                current_run = {"start_event": eid, "end_event": eid, "count": 1}
            else:
                current_run["end_event"] = eid
                current_run["count"] += 1
        else:
            if current_run and current_run["count"] >= 2:
                batching_runs.append(current_run)
            current_run = {"start_event": eid, "end_event": eid, "count": 1}

        prev_state = cur_state

    if current_run and current_run["count"] >= 2:
        batching_runs.append(current_run)

    batching_runs.sort(key=lambda r: -r["count"])

    if saved_event is not None:
        session.set_event(saved_event)

    analyzed = min(len(draw_eids), MAX_DRAWS_FOR_STATE)
    total = len(draw_eids)

    suggestions: list = []
    if change_counts.get("shader", 0) > analyzed // 3:
        suggestions.append(
            f"Shader changed {change_counts['shader']} times (once every {analyzed // max(change_counts['shader'], 1)} draws); consider sorting batches by shader"
        )
    if change_counts.get("render_target", 0) > 5:
        suggestions.append(
            f"Render target switched {change_counts['render_target']} times; each switch triggers a tile store/load on tile-based GPUs"
        )
    if batching_runs and batching_runs[0]["count"] > 3:
        best = batching_runs[0]
        suggestions.append(
            f"Events {best['start_event']}-{best['end_event']}: {best['count']} consecutive draw calls share identical state and could be merged into an instanced draw"
        )

    return {
        "analyzed_draws": analyzed,
        "total_draws": total,
        "pass_filter": pass_name,
        "change_counts": change_counts,
        "batching_opportunities": batching_runs[:5],
        "suggestions": suggestions,
        "note": f"Analyzed first {analyzed} of {total} draw calls"
        if analyzed < total
        else "",
    }
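The run detection that feeds `batching_opportunities` can be sketched without a capture. This is a simplified, hypothetical helper: it compares whole hashable state snapshots rather than per-field dicts, but the run semantics (consecutive draws with unchanged state, minimum run length 2) match the logic above:

```python
def find_batching_runs(states, min_len=2):
    """states: per-draw hashable state snapshots, in submission order.
    Returns (start_index, length) for each run of >= min_len consecutive
    draws whose state equals the previous draw's state."""
    runs, start, length = [], 0, 1
    for i in range(1, len(states)):
        if states[i] == states[i - 1]:
            length += 1
        else:
            if length >= min_len:
                runs.append((start, length))
            start, length = i, 1
    if length >= min_len:
        runs.append((start, length))
    return runs

states = ["A", "A", "A", "B", "C", "C"]
print(find_batching_runs(states))  # [(0, 3), (4, 2)]
```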
616
engine/tools/renderdoc_parser/tools/pipeline_tools.py
Normal file
@@ -0,0 +1,616 @@
|
||||
"""Pipeline inspection tools: get_pipeline_state, get_shader_bindings, get_vertex_inputs, get_draw_call_state."""
|
||||
|
||||
from typing import Optional
|
||||
|
||||
from ..session import get_session
|
||||
from ..util import (
|
||||
rd,
|
||||
make_error,
|
||||
SHADER_STAGE_MAP,
|
||||
BLEND_FACTOR_MAP,
|
||||
BLEND_OP_MAP,
|
||||
COMPARE_FUNC_MAP,
|
||||
STENCIL_OP_MAP,
|
||||
CULL_MODE_MAP,
|
||||
FILL_MODE_MAP,
|
||||
TOPOLOGY_MAP,
|
||||
enum_str,
|
||||
blend_formula,
|
||||
serialize_sig_element,
|
||||
)
|
||||
|
||||
|
||||
def _serialize_viewport(vp) -> dict:
|
||||
return {
|
||||
"x": vp.x,
|
||||
"y": vp.y,
|
||||
"width": vp.width,
|
||||
"height": vp.height,
|
||||
"min_depth": vp.minDepth,
|
||||
"max_depth": vp.maxDepth,
|
||||
}
|
||||
|
||||
|
||||
def _serialize_scissor(sc) -> dict:
|
||||
return {
|
||||
"x": sc.x,
|
||||
"y": sc.y,
|
||||
"width": sc.width,
|
||||
"height": sc.height,
|
||||
}
|
||||
|
||||
|
||||
def _serialize_blend_eq(eq) -> dict:
|
||||
return {
|
||||
"source": enum_str(eq.source, BLEND_FACTOR_MAP, "BlendFactor."),
|
||||
"destination": enum_str(eq.destination, BLEND_FACTOR_MAP, "BlendFactor."),
|
||||
"operation": enum_str(eq.operation, BLEND_OP_MAP, "BlendOp."),
|
||||
}
|
||||
|
||||
|
||||
def _serialize_pipeline_state(state) -> dict:
|
||||
result: dict = {}
|
||||
warnings: list = []
|
||||
    shaders = {}
    for name, stage in SHADER_STAGE_MAP.items():
        if stage == rd.ShaderStage.Compute:
            continue
        refl = state.GetShaderReflection(stage)
        if refl is not None:
            shaders[name] = {
                "bound": True,
                "entry_point": state.GetShaderEntryPoint(stage),
                "resource_id": str(refl.resourceId),
            }
        else:
            shaders[name] = {"bound": False}
    result["shaders"] = shaders

    try:
        topo = state.GetPrimitiveTopology()
        result["topology"] = enum_str(topo, TOPOLOGY_MAP, "Topology.")
    except Exception as e:
        warnings.append(f"topology: {type(e).__name__}: {e}")

    try:
        viewports = []
        for i in range(16):
            try:
                vp = state.GetViewport(i)
                if vp.width > 0 and vp.height > 0:
                    viewports.append(_serialize_viewport(vp))
            except Exception:
                break
        result["viewports"] = viewports
    except Exception as e:
        warnings.append(f"viewports: {type(e).__name__}: {e}")

    try:
        scissors = []
        for i in range(16):
            try:
                sc = state.GetScissor(i)
                if sc.width > 0 and sc.height > 0:
                    scissors.append(_serialize_scissor(sc))
            except Exception:
                break
        result["scissors"] = scissors
    except Exception as e:
        warnings.append(f"scissors: {type(e).__name__}: {e}")

    try:
        cbs = state.GetColorBlends()
        blends = []
        for b in cbs:
            color_eq = _serialize_blend_eq(b.colorBlend)
            alpha_eq = _serialize_blend_eq(b.alphaBlend)
            entry = {
                "enabled": b.enabled,
                "write_mask": b.writeMask,
                "color": color_eq,
                "alpha": alpha_eq,
            }
            if b.enabled:
                entry["formula"] = blend_formula(
                    color_eq["source"],
                    color_eq["destination"],
                    color_eq["operation"],
                    alpha_eq["source"],
                    alpha_eq["destination"],
                    alpha_eq["operation"],
                )
            blends.append(entry)
        result["color_blend"] = {"blends": blends}
    except Exception as e:
        warnings.append(f"color_blend: {type(e).__name__}: {e}")

    try:
        sf = state.GetStencilFaces()
        if sf:
            front = sf[0]
            back = sf[1] if len(sf) > 1 else None
            stencil_result = {
                "front": {
                    "function": enum_str(front.function, COMPARE_FUNC_MAP, "CompareFunc."),
                    "fail_operation": enum_str(front.failOperation, STENCIL_OP_MAP, "StencilOp."),
                    "pass_operation": enum_str(front.passOperation, STENCIL_OP_MAP, "StencilOp."),
                    "depth_fail_operation": enum_str(front.depthFailOperation, STENCIL_OP_MAP, "StencilOp."),
                    "compare_mask": front.compareMask,
                    "write_mask": front.writeMask,
                    "reference": front.reference,
                }
            }
            if back:
                stencil_result["back"] = {
                    "function": enum_str(back.function, COMPARE_FUNC_MAP, "CompareFunc."),
                    "fail_operation": enum_str(back.failOperation, STENCIL_OP_MAP, "StencilOp."),
                    "pass_operation": enum_str(back.passOperation, STENCIL_OP_MAP, "StencilOp."),
                    "depth_fail_operation": enum_str(back.depthFailOperation, STENCIL_OP_MAP, "StencilOp."),
                    "compare_mask": back.compareMask,
                    "write_mask": back.writeMask,
                    "reference": back.reference,
                }
            result["stencil_state"] = stencil_result
    except Exception as e:
        warnings.append(f"stencil_state: {type(e).__name__}: {e}")

    try:
        outputs = state.GetOutputTargets()
        result["output_targets"] = [
            {"resource_id": str(o.resource)} for o in outputs if int(o.resource) != 0
        ]
    except Exception as e:
        warnings.append(f"output_targets: {type(e).__name__}: {e}")

    try:
        depth = state.GetDepthTarget()
        if int(depth.resource) != 0:
            result["depth_target"] = {"resource_id": str(depth.resource)}
    except Exception as e:
        warnings.append(f"depth_target: {type(e).__name__}: {e}")

    if warnings:
        result["warnings"] = warnings

    return result


def get_pipeline_state(event_id: Optional[int] = None) -> dict:
    """Get the full graphics pipeline state at the current or specified event.

    Returns topology, viewports, scissors, rasterizer, blend, depth, stencil state,
    bound shaders, and output targets.

    Args:
        event_id: Optional event ID to navigate to first. Uses current event if omitted.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    state = session.controller.GetPipelineState()
    result = _serialize_pipeline_state(state)
    result["event_id"] = session.current_event
    return result


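Enabled blend targets above get a human-readable `formula` string built by `blend_formula` (defined in the module's util, not shown here). A minimal standalone sketch of what such a helper might produce — the function body below is an illustrative assumption, not the module's actual implementation:

```python
def blend_formula(csrc, cdst, cop, asrc, adst, aop):
    # Map blend operations to arithmetic symbols; unknown ops pass through verbatim.
    ops = {"Add": "+", "Subtract": "-"}
    color = f"rgb = src.rgb*{csrc} {ops.get(cop, cop)} dst.rgb*{cdst}"
    alpha = f"a = src.a*{asrc} {ops.get(aop, aop)} dst.a*{adst}"
    return color + "; " + alpha

print(blend_formula("SrcAlpha", "InvSrcAlpha", "Add", "One", "Zero", "Add"))
```

For classic alpha blending this renders as `rgb = src.rgb*SrcAlpha + dst.rgb*InvSrcAlpha; a = src.a*One + dst.a*Zero`, which is the kind of summary the `formula` field carries.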
def get_shader_bindings(stage: str, event_id: Optional[int] = None) -> dict:
    """Get resource bindings for a specific shader stage at the current event.

    Shows constant buffers, shader resource views (SRVs), UAVs, and samplers.

    Args:
        stage: Shader stage name (vertex, hull, domain, geometry, pixel, compute).
        event_id: Optional event ID to navigate to first.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    stage_enum = SHADER_STAGE_MAP.get(stage.lower())
    if stage_enum is None:
        return make_error(
            f"Unknown shader stage: {stage}. Valid: {list(SHADER_STAGE_MAP.keys())}",
            "API_ERROR",
        )

    state = session.controller.GetPipelineState()
    refl = state.GetShaderReflection(stage_enum)
    if refl is None:
        return make_error(f"No shader bound at stage '{stage}'", "API_ERROR")

    bindings: dict = {"stage": stage, "event_id": session.current_event}

    cbs = []
    for i, cb_refl in enumerate(refl.constantBlocks):
        try:
            cb_bind = state.GetConstantBlock(stage_enum, i, 0)
            cbs.append(
                {
                    "index": i,
                    "name": cb_refl.name,
                    "byte_size": cb_refl.byteSize,
                    "resource_id": str(cb_bind.descriptor.resource),
                }
            )
        except Exception:
            cbs.append(
                {"index": i, "name": cb_refl.name, "error": "failed to read binding"}
            )
    bindings["constant_buffers"] = cbs

    ros = []
    try:
        all_ro_binds = state.GetReadOnlyResources(stage_enum)
        ro_by_index: dict = {}
        for b in all_ro_binds:
            ro_by_index.setdefault(b.access.index, []).append(b)
    except Exception as e:
        all_ro_binds = None
        ro_error = f"{type(e).__name__}: {e}"

    for i, ro_refl in enumerate(refl.readOnlyResources):
        if all_ro_binds is None:
            ros.append(
                {
                    "index": i,
                    "name": ro_refl.name,
                    "error": f"failed to read bindings: {ro_error}",
                }
            )
            continue
        entries = []
        for b in ro_by_index.get(i, []):
            entries.append({"resource_id": str(b.descriptor.resource)})
        ros.append(
            {
                "index": i,
                "name": ro_refl.name,
                "type": str(ro_refl.textureType),
                "bindings": entries,
            }
        )
    bindings["read_only_resources"] = ros

    rws = []
    try:
        all_rw_binds = state.GetReadWriteResources(stage_enum)
        rw_by_index: dict = {}
        for b in all_rw_binds:
            rw_by_index.setdefault(b.access.index, []).append(b)
    except Exception as e:
        all_rw_binds = None
        rw_error = f"{type(e).__name__}: {e}"

    for i, rw_refl in enumerate(refl.readWriteResources):
        if all_rw_binds is None:
            rws.append(
                {
                    "index": i,
                    "name": rw_refl.name,
                    "error": f"failed to read bindings: {rw_error}",
                }
            )
            continue
        entries = []
        for b in rw_by_index.get(i, []):
            entries.append({"resource_id": str(b.descriptor.resource)})
        rws.append(
            {
                "index": i,
                "name": rw_refl.name,
                "type": str(rw_refl.textureType),
                "bindings": entries,
            }
        )
    bindings["read_write_resources"] = rws

    samplers = []
    try:
        all_sampler_binds = state.GetSamplers(stage_enum)
        s_by_index: dict = {}
        for b in all_sampler_binds:
            s_by_index.setdefault(b.access.index, []).append(b)
    except Exception as e:
        all_sampler_binds = None
        s_error = f"{type(e).__name__}: {e}"

    for i, s_refl in enumerate(refl.samplers):
        if all_sampler_binds is None:
            samplers.append(
                {
                    "index": i,
                    "name": s_refl.name,
                    "error": f"failed to read bindings: {s_error}",
                }
            )
            continue
        entries = []
        for b in s_by_index.get(i, []):
            entries.append({"resource_id": str(b.descriptor.resource)})
        samplers.append(
            {
                "index": i,
                "name": s_refl.name,
                "bindings": entries,
            }
        )
    bindings["samplers"] = samplers

    return bindings


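`get_shader_bindings` pairs each reflection entry with its live descriptor accesses by bucketing the flat access list on the reflection-slot index; the same `setdefault` grouping is repeated for SRVs, UAVs, and samplers. The grouping step in isolation, with plain dicts standing in for RenderDoc's binding objects:

```python
def group_by_index(accesses):
    # Bucket each descriptor access under its reflection-slot index.
    by_index = {}
    for access in accesses:
        by_index.setdefault(access["index"], []).append(access["resource"])
    return by_index

binds = [
    {"index": 0, "resource": "ResourceId::12"},
    {"index": 0, "resource": "ResourceId::13"},
    {"index": 2, "resource": "ResourceId::40"},
]
print(group_by_index(binds))
```

A slot with no accesses simply produces no bucket, which is why the caller uses `by_index.get(i, [])` and reports an empty `bindings` list for unbound slots.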
def get_vertex_inputs(event_id: Optional[int] = None) -> dict:
    """Get vertex input layout and buffer bindings at the current event.

    Shows vertex attributes (name, format, offset), vertex buffer bindings, and index buffer.

    Args:
        event_id: Optional event ID to navigate to first.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    state = session.controller.GetPipelineState()

    ib = state.GetIBuffer()
    ib_info = {
        "resource_id": str(ib.resourceId),
        "byte_offset": ib.byteOffset,
        "byte_stride": ib.byteStride,
    }

    vbs = state.GetVBuffers()
    vb_list = []
    for i, vb in enumerate(vbs):
        if int(vb.resourceId) == 0:
            continue
        vb_list.append(
            {
                "slot": i,
                "resource_id": str(vb.resourceId),
                "byte_offset": vb.byteOffset,
                "byte_stride": vb.byteStride,
            }
        )

    attrs = state.GetVertexInputs()
    attr_list = []
    for a in attrs:
        attr_list.append(
            {
                "name": a.name,
                "vertex_buffer": a.vertexBuffer,
                "byte_offset": a.byteOffset,
                "per_instance": a.perInstance,
                "instance_rate": a.instanceRate,
                "format": str(a.format.Name()),
            }
        )

    return {
        "event_id": session.current_event,
        "index_buffer": ib_info,
        "vertex_buffers": vb_list,
        "vertex_attributes": attr_list,
    }


def get_draw_call_state(event_id: int) -> dict:
    """Get complete draw call state in a single call.

    Returns action info, blend, depth, stencil, rasterizer, bound textures,
    render targets, and shader summaries: everything needed to understand a draw call.

    Args:
        event_id: The event ID of the draw call to inspect.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    return _get_draw_state_dict(session, event_id)


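The helper below is shared with `diff_draw_calls`, and comparing two of its result dicts reduces to a shallow key-by-key diff. A hedged sketch of such a comparison (the real `diff_draw_calls`, which is not shown in this hunk, may differ):

```python
def shallow_diff(state_a, state_b):
    # Report every top-level key whose value differs between two draw-state dicts.
    keys = set(state_a) | set(state_b)
    return {
        k: {"a": state_a.get(k), "b": state_b.get(k)}
        for k in sorted(keys)
        if state_a.get(k) != state_b.get(k)
    }

a = {"topology": "TriangleList", "vertex_count": 36}
b = {"topology": "TriangleStrip", "vertex_count": 36}
print(shallow_diff(a, b))
```

Keys present in only one dict show up with `None` on the other side, which makes "state newly set between two events" easy to spot.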
def _get_draw_state_dict(session, event_id: int) -> dict:
    """Internal helper: build a complete draw call state dict.

    Shared by get_draw_call_state and diff_draw_calls.
    """
    err = session.set_event(event_id)
    if err:
        return err

    action = session.get_action(event_id)
    if action is None:
        return make_error(f"Event ID {event_id} not found", "INVALID_EVENT_ID")

    structured_file = session.structured_file
    state = session.controller.GetPipelineState()

    warnings: list = []

    result: dict = {
        "event_id": event_id,
        "action_name": action.GetName(structured_file),
        "vertex_count": action.numIndices,
        "instance_count": action.numInstances,
    }

    # Topology
    try:
        topo = state.GetPrimitiveTopology()
        result["topology"] = enum_str(topo, TOPOLOGY_MAP, "Topology.")
    except Exception as e:
        result["topology"] = "Unknown"
        warnings.append(f"topology: {type(e).__name__}: {e}")

    # Blend state (first blend target)
    try:
        cbs = state.GetColorBlends()
        if cbs:
            b = cbs[0]
            color_eq = _serialize_blend_eq(b.colorBlend)
            alpha_eq = _serialize_blend_eq(b.alphaBlend)
            blend_info: dict = {
                "enabled": b.enabled,
                "color_src": color_eq["source"],
                "color_dst": color_eq["destination"],
                "color_op": color_eq["operation"],
                "alpha_src": alpha_eq["source"],
                "alpha_dst": alpha_eq["destination"],
                "alpha_op": alpha_eq["operation"],
            }
            if b.enabled:
                blend_info["formula"] = blend_formula(
                    color_eq["source"],
                    color_eq["destination"],
                    color_eq["operation"],
                    alpha_eq["source"],
                    alpha_eq["destination"],
                    alpha_eq["operation"],
                )
            result["blend"] = blend_info
    except Exception as e:
        warnings.append(f"blend: {type(e).__name__}: {e}")

    # Stencil state
    try:
        stencil_faces = state.GetStencilFaces()
        if stencil_faces:
            front = stencil_faces[0]
            result["stencil"] = {
                "enabled": True,
                "front_func": enum_str(front.function, COMPARE_FUNC_MAP, "CompareFunc."),
                "front_fail_op": enum_str(front.failOperation, STENCIL_OP_MAP, "StencilOp."),
                "front_pass_op": enum_str(front.passOperation, STENCIL_OP_MAP, "StencilOp."),
                "front_depth_fail_op": enum_str(front.depthFailOperation, STENCIL_OP_MAP, "StencilOp."),
            }
    except Exception as e:
        warnings.append(f"stencil: {type(e).__name__}: {e}")

    # Textures bound to the pixel shader
    textures = []
    try:
        ps_refl = state.GetShaderReflection(rd.ShaderStage.Pixel)
        if ps_refl is not None:
            all_ro = state.GetReadOnlyResources(rd.ShaderStage.Pixel)
            ro_by_index: dict = {}
            for b in all_ro:
                ro_by_index.setdefault(b.access.index, []).append(b)
            for i, ro_refl in enumerate(ps_refl.readOnlyResources):
                for b in ro_by_index.get(i, []):
                    rid_str = str(b.descriptor.resource)
                    tex_desc = session.get_texture_desc(rid_str)
                    tex_entry: dict = {
                        "slot": i,
                        "name": ro_refl.name,
                        "resource_id": rid_str,
                    }
                    if tex_desc is not None:
                        tex_entry["size"] = f"{tex_desc.width}x{tex_desc.height}"
                        tex_entry["format"] = str(tex_desc.format.Name())
                    textures.append(tex_entry)
    except Exception as e:
        warnings.append(f"textures: {type(e).__name__}: {e}")
    result["textures"] = textures

    # Render targets
    render_targets = []
    try:
        outputs = state.GetOutputTargets()
        for o in outputs:
            if int(o.resource) == 0:
                continue
            rid_str = str(o.resource)
            rt_entry: dict = {"resource_id": rid_str}
            tex_desc = session.get_texture_desc(rid_str)
            if tex_desc is not None:
                rt_entry["size"] = f"{tex_desc.width}x{tex_desc.height}"
                rt_entry["format"] = str(tex_desc.format.Name())
            render_targets.append(rt_entry)
    except Exception as e:
        warnings.append(f"render_targets: {type(e).__name__}: {e}")
    result["render_targets"] = render_targets

    # Depth target
    try:
        dt = state.GetDepthTarget()
        if int(dt.resource) != 0:
            rid_str = str(dt.resource)
            dt_entry: dict = {"resource_id": rid_str}
            tex_desc = session.get_texture_desc(rid_str)
            if tex_desc is not None:
                dt_entry["size"] = f"{tex_desc.width}x{tex_desc.height}"
                dt_entry["format"] = str(tex_desc.format.Name())
            result["depth_target"] = dt_entry
    except Exception as e:
        warnings.append(f"depth_target: {type(e).__name__}: {e}")

    # Shader summaries
    shaders = {}
    for sname, sstage in SHADER_STAGE_MAP.items():
        if sstage == rd.ShaderStage.Compute:
            continue
        refl = state.GetShaderReflection(sstage)
        if refl is None:
            continue
        info: dict = {
            "resource_id": str(refl.resourceId),
            "entry_point": state.GetShaderEntryPoint(sstage),
        }
        if sname == "vertex":
            info["inputs"] = [
                s.semanticIdxName or s.varName for s in refl.inputSignature
            ]
        if sname == "pixel":
            info["texture_count"] = len(refl.readOnlyResources)
        shaders[sname] = info
    result["shaders"] = shaders

    if warnings:
        result["warnings"] = warnings

    return result
136
engine/tools/renderdoc_parser/tools/resource_tools.py
Normal file
@@ -0,0 +1,136 @@
"""Resource analysis tools: list_textures, list_buffers, list_resources, get_resource_usage."""

import re
from typing import Optional

from ..session import get_session
from ..util import (
    rd,
    make_error,
    serialize_texture_desc,
    serialize_buffer_desc,
    serialize_usage_entry,
)


def list_textures(
    filter_format: Optional[str] = None, min_width: Optional[int] = None
) -> dict:
    """List all textures in the capture.

    Args:
        filter_format: Optional format substring to filter by (e.g. "R8G8B8A8", "BC7").
        min_width: Optional minimum width to filter by.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    textures = session.controller.GetTextures()
    results = []
    for tex in textures:
        if min_width is not None and tex.width < min_width:
            continue
        fmt_name = str(tex.format.Name())
        if filter_format and filter_format.upper() not in fmt_name.upper():
            continue
        results.append(serialize_texture_desc(tex))

    return {"textures": results, "count": len(results)}


def list_buffers(min_size: Optional[int] = None) -> dict:
    """List all buffers in the capture.

    Args:
        min_size: Optional minimum byte size to filter by.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    buffers = session.controller.GetBuffers()
    results = []
    for buf in buffers:
        if min_size is not None and buf.length < min_size:
            continue
        results.append(serialize_buffer_desc(buf))

    return {"buffers": results, "count": len(results)}


def list_resources(
    type_filter: Optional[str] = None, name_pattern: Optional[str] = None
) -> dict:
    """List all named resources in the capture.

    Args:
        type_filter: Optional resource type substring filter (e.g. "Texture", "Buffer").
        name_pattern: Optional regex pattern to filter resource names (case-insensitive).
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    resources = session.controller.GetResources()
    pattern = None
    if name_pattern:
        try:
            pattern = re.compile(name_pattern, re.IGNORECASE)
        except re.error as e:
            return make_error(f"Invalid regex pattern: {e}", "API_ERROR")

    results = []
    for res in resources:
        if type_filter and type_filter.lower() not in str(res.type).lower():
            continue
        try:
            name = res.name
        except Exception:
            name = f"<unreadable name for {str(res.resourceId)}>"
        if not isinstance(name, str):
            try:
                name = name.decode("utf-8", errors="replace")
            except Exception:
                name = repr(name)
        if pattern and not pattern.search(name):
            continue
        results.append(
            {
                "resource_id": str(res.resourceId),
                "name": name,
                "type": str(res.type),
            }
        )

    return {"resources": results, "count": len(results)}


def get_resource_usage(resource_id: str) -> dict:
    """Get the usage history of a resource across all events.

    Shows which events read from or write to this resource.

    Args:
        resource_id: The resource ID string (as returned by other tools).
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    rid = session.resolve_resource_id(resource_id)
    if rid is None:
        return make_error(
            f"Resource ID '{resource_id}' not found", "INVALID_RESOURCE_ID"
        )

    usages = session.controller.GetUsage(rid)
    results = [serialize_usage_entry(u) for u in usages]

    return {"resource_id": resource_id, "usages": results, "count": len(results)}
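`list_resources` layers a substring type filter on top of a case-insensitive regex name filter. The same filtering logic on plain records, runnable without a capture (the records here are simplified stand-ins for RenderDoc resource descriptions):

```python
import re

def filter_resources(resources, type_filter=None, name_pattern=None):
    # Substring match on type, regex search on name (case-insensitive), as in list_resources.
    pattern = re.compile(name_pattern, re.IGNORECASE) if name_pattern else None
    kept = []
    for res in resources:
        if type_filter and type_filter.lower() not in res["type"].lower():
            continue
        if pattern and not pattern.search(res["name"]):
            continue
        kept.append(res)
    return kept

resources = [
    {"name": "GBuffer_Albedo", "type": "Texture"},
    {"name": "SkinningMatrices", "type": "Buffer"},
    {"name": "GBuffer_Normal", "type": "Texture"},
]
print(filter_resources(resources, type_filter="texture", name_pattern="^gbuffer"))
```

Note that `re.search` is used rather than `re.match`, so `name_pattern="albedo"` matches anywhere in the name; anchor with `^` or `$` for prefix/suffix matches.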
217
engine/tools/renderdoc_parser/tools/session_tools.py
Normal file
@@ -0,0 +1,217 @@
"""Session management tools: open_capture, close_capture, get_capture_info, get_frame_overview."""

import os

from ..session import get_session
from ..util import rd, make_error, flags_to_list


def _get_gpu_quirks(driver_name: str) -> list:
    """Return known GPU/API quirks based on the driver name string."""
    quirks: list = []
    name_upper = driver_name.upper()

    if "ADRENO" in name_upper:
        quirks.extend(
            [
                "Adreno: mediump float precision can fall below what the spec requires; force highp for critical calculations",
                "Adreno: R11G11B10_FLOAT has no sign bit; writing negative values is clamped to 0 or undefined",
                "Adreno: textureLod has known bugs in fragment shaders on some driver versions",
            ]
        )
    elif "MALI" in name_upper:
        quirks.extend(
            [
                "Mali: storing negative values in R11G11B10_FLOAT is undefined",
                "Mali: heavy use of discard disables early-Z and hurts tile-based performance",
                "Mali: some models accumulate precision errors in mediump vector math",
            ]
        )
    elif "POWERVR" in name_upper or "IMAGINATION" in name_upper:
        quirks.extend(
            [
                "PowerVR: tile-based deferred rendering; avoid unnecessary framebuffer loads/stores",
                "PowerVR: avoid binding large numbers of texture samplers",
            ]
        )
    elif "APPLE" in name_upper:
        quirks.extend(
            [
                "Apple GPU: high-efficiency TBDR; watch tile memory bandwidth",
                "Apple: float16 performs well on Metal; recommended for post-processing",
            ]
        )

    if "OPENGL ES" in name_upper or "GLES" in name_upper:
        quirks.append("OpenGL ES: use precision qualifiers (highp/mediump/lowp) correctly")
        quirks.append("OpenGL ES: extension dependencies must be checked against GL_EXTENSIONS")

    return quirks


def open_capture(filepath: str) -> dict:
    """Open a RenderDoc capture (.rdc) file for analysis.

    Automatically closes any previously opened capture.
    Returns capture overview: API type, action count, resource counts.

    Args:
        filepath: Absolute path to the .rdc capture file.
    """
    filepath = os.path.normpath(filepath)
    session = get_session()
    return session.open(filepath)


def close_capture() -> dict:
    """Close the currently open capture file and free resources."""
    session = get_session()
    return session.close()


def get_capture_info() -> dict:
    """Get information about the currently open capture.

    Returns API type, file path, action count, texture/buffer counts,
    and known GPU quirks based on the detected driver/GPU.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    controller = session.controller
    textures = controller.GetTextures()
    buffers = controller.GetBuffers()

    driver_name = session.driver_name

    # The render target drawn to most often is treated as the main color target.
    rt_draw_counts = {}
    for _eid, action in session.action_map.items():
        if action.flags & rd.ActionFlags.Drawcall:
            for o in action.outputs:
                if int(o) != 0:
                    key = str(o)
                    rt_draw_counts[key] = rt_draw_counts.get(key, 0) + 1
    resolution = "unknown"
    main_color_format = "unknown"
    if rt_draw_counts:
        top_rid = max(rt_draw_counts, key=lambda k: rt_draw_counts[k])
        tex_desc = session.get_texture_desc(top_rid)
        if tex_desc is not None:
            resolution = f"{tex_desc.width}x{tex_desc.height}"
            main_color_format = str(tex_desc.format.Name())

    return {
        "filepath": session.filepath,
        "api": driver_name,
        "resolution": resolution,
        "main_color_format": main_color_format,
        "total_actions": len(session.action_map),
        "textures": len(textures),
        "buffers": len(buffers),
        "current_event": session.current_event,
        "known_gpu_quirks": _get_gpu_quirks(driver_name),
    }


def get_frame_overview() -> dict:
    """Get a frame-level statistics overview of the current capture.

    Returns action counts by type, texture/buffer memory totals,
    main render targets with draw counts, and estimated resolution.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err

    controller = session.controller

    draw_calls = 0
    clears = 0
    dispatches = 0
    for _eid, action in session.action_map.items():
        if action.flags & rd.ActionFlags.Drawcall:
            draw_calls += 1
        if action.flags & rd.ActionFlags.Clear:
            clears += 1
        if action.flags & rd.ActionFlags.Dispatch:
            dispatches += 1

    textures = controller.GetTextures()
    tex_total_bytes = 0
    for tex in textures:
        # Rough bytes-per-pixel estimate derived from the format name.
        bpp = 4
        fmt_name = str(tex.format.Name()).upper()
        if "R16G16B16A16" in fmt_name:
            bpp = 8
        elif "R32G32B32A32" in fmt_name:
            bpp = 16
        elif "R8G8B8A8" in fmt_name or "B8G8R8A8" in fmt_name:
            bpp = 4
        elif "R16G16" in fmt_name:
            bpp = 4
        elif "R32" in fmt_name and "G32" not in fmt_name:
            bpp = 4
        elif "R16" in fmt_name and "G16" not in fmt_name:
            bpp = 2
        elif "R8" in fmt_name and "G8" not in fmt_name:
            bpp = 1
        elif "BC" in fmt_name or "ETC" in fmt_name or "ASTC" in fmt_name:
            bpp = 1
        elif "D24" in fmt_name or "D32" in fmt_name:
            bpp = 4
        elif "D16" in fmt_name:
            bpp = 2
        size = tex.width * tex.height * max(tex.depth, 1) * max(tex.arraysize, 1) * bpp
        if tex.mips > 1:
            # A full mip chain adds roughly one third on top of the base level.
            size = int(size * 1.33)
        tex_total_bytes += size

    buffers = controller.GetBuffers()
    buf_total_bytes = sum(buf.length for buf in buffers)

    rt_draw_counts = {}
    for _eid, action in sorted(session.action_map.items()):
        if not (action.flags & rd.ActionFlags.Drawcall):
            continue
        for o in action.outputs:
            if int(o) != 0:
                key = str(o)
                rt_draw_counts[key] = rt_draw_counts.get(key, 0) + 1

    render_targets = []
    for rid_str, count in sorted(rt_draw_counts.items(), key=lambda x: -x[1]):
        rt_entry: dict = {"resource_id": rid_str, "draw_count": count}
        tex_desc = session.get_texture_desc(rid_str)
        if tex_desc is not None:
            rt_entry["size"] = f"{tex_desc.width}x{tex_desc.height}"
            rt_entry["format"] = str(tex_desc.format.Name())
        render_targets.append(rt_entry)

    resolution = "unknown"
    if render_targets:
        top_rt = render_targets[0]
        if "size" in top_rt:
            resolution = top_rt["size"]

    return {
        "filepath": session.filepath,
        "api": session.driver_name,
        "resolution": resolution,
        "total_actions": len(session.action_map),
        "draw_calls": draw_calls,
        "clears": clears,
        "dispatches": dispatches,
        "textures": {
            "count": len(textures),
            "total_memory_mb": round(tex_total_bytes / (1024 * 1024), 2),
        },
        "buffers": {
            "count": len(buffers),
            "total_memory_mb": round(buf_total_bytes / (1024 * 1024), 2),
        },
        "render_targets": render_targets,
    }
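The memory totals in `get_frame_overview` come from a bytes-per-pixel heuristic keyed off the format name, not from exact driver allocation sizes (alignment, tiling, and metadata are ignored). The estimate in isolation, extracted into a standalone function:

```python
def estimate_texture_bytes(width, height, depth, arraysize, mips, fmt_name):
    # Coarse bytes-per-pixel guess from the format name; defaults to 4.
    fmt = fmt_name.upper()
    if "R32G32B32A32" in fmt:
        bpp = 16
    elif "R16G16B16A16" in fmt:
        bpp = 8
    elif "D16" in fmt or ("R16" in fmt and "G16" not in fmt):
        bpp = 2
    elif ("R8" in fmt and "G8" not in fmt) or "BC" in fmt or "ETC" in fmt or "ASTC" in fmt:
        bpp = 1  # block-compressed formats average roughly 1 byte per pixel
    else:
        bpp = 4
    size = width * height * max(depth, 1) * max(arraysize, 1) * bpp
    if mips > 1:
        size = int(size * 1.33)  # a full mip chain adds about one third
    return size

print(estimate_texture_bytes(1920, 1080, 1, 1, 1, "R8G8B8A8_UNORM"))
```

The 1.33 mip factor is the geometric-series limit 1 + 1/4 + 1/16 + ... = 4/3, so the reported `total_memory_mb` should be read as an order-of-magnitude figure, not an exact budget.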
333
engine/tools/renderdoc_parser/tools/shader_tools.py
Normal file
@@ -0,0 +1,333 @@
|
||||
"""Shader analysis tools: disassemble_shader, get_shader_reflection, get_cbuffer_contents."""
|
||||
|
||||
from typing import Optional
|
||||
|
||||
from ..session import get_session
|
||||
from ..util import (
|
||||
rd,
|
||||
make_error,
|
||||
SHADER_STAGE_MAP,
|
||||
serialize_shader_variable,
|
||||
serialize_sig_element,
|
||||
)
|
||||
|
||||
|
||||
def _reflection_fallback(stage: str, refl) -> dict:
|
||||
result: dict = {
|
||||
"stage": stage,
|
||||
"source_type": "reflection_only",
|
||||
"resource_id": str(refl.resourceId),
|
||||
"entry_point": refl.entryPoint,
|
||||
"input_signature": [serialize_sig_element(s) for s in refl.inputSignature],
|
||||
"output_signature": [serialize_sig_element(s) for s in refl.outputSignature],
|
||||
"constant_blocks": [
|
||||
{"name": cb.name, "byte_size": cb.byteSize} for cb in refl.constantBlocks
|
||||
],
|
||||
"read_only_resources": [
|
||||
{"name": ro.name, "type": str(ro.textureType)}
|
||||
for ro in refl.readOnlyResources
|
||||
],
|
||||
"read_write_resources": [
|
||||
{"name": rw.name, "type": str(rw.textureType)}
|
||||
for rw in refl.readWriteResources
|
||||
],
|
||||
"note": "Disassembly unavailable — showing reflection data only",
|
||||
}
|
||||
return result
|
||||
|
||||
|
||||
def disassemble_shader(
|
||||
stage: str,
|
||||
target: Optional[str] = None,
|
||||
event_id: Optional[int] = None,
|
||||
line_range: Optional[list] = None,
|
||||
search: Optional[str] = None,
|
||||
) -> dict:
|
||||
"""Disassemble the shader bound at the specified stage.
|
||||
|
||||
If target is omitted, tries all available targets in order and returns the
|
||||
first successful result. Falls back to reflection info if all disassembly fails.
|
||||
|
||||
Args:
|
||||
stage: Shader stage (vertex, hull, domain, geometry, pixel, compute).
|
||||
target: Disassembly target/format. If omitted, tries all available targets.
|
||||
event_id: Optional event ID to navigate to first.
|
||||
line_range: [start_line, end_line] (1-based). Only return lines in this range.
|
||||
search: Keyword to search in the disassembly. Returns matching lines with
|
||||
5 lines of context before and after each match.
|
||||
"""
|
||||
session = get_session()
|
||||
err = session.require_open()
|
||||
if err:
|
||||
return err
|
||||
err = session.ensure_event(event_id)
|
||||
if err:
|
||||
return err
|
||||
|
||||
stage_enum = SHADER_STAGE_MAP.get(stage.lower())
|
||||
if stage_enum is None:
|
||||
return make_error(f"Unknown shader stage: {stage}", "API_ERROR")
|
||||
|
||||
state = session.controller.GetPipelineState()
|
||||
refl = state.GetShaderReflection(stage_enum)
|
||||
if refl is None:
|
||||
return make_error(f"No shader bound at stage '{stage}'", "API_ERROR")
|
||||
|
||||
if stage_enum == rd.ShaderStage.Compute:
|
||||
pipe = state.GetComputePipelineObject()
|
||||
else:
|
||||
pipe = state.GetGraphicsPipelineObject()
|
||||
    targets = session.controller.GetDisassemblyTargets(True)

    if not targets:
        return _reflection_fallback(stage, refl)

    disasm: Optional[str] = None
    used_target: Optional[str] = None

    if target is not None:
        if target not in targets:
            return make_error(
                f"Unknown disassembly target: {target}. Available: {targets}",
                "API_ERROR",
            )
        disasm = session.controller.DisassembleShader(pipe, refl, target)
        used_target = target
    else:
        for t in targets:
            try:
                d = session.controller.DisassembleShader(pipe, refl, t)
                if d and d.strip():
                    disasm = d
                    used_target = t
                    break
            except Exception:
                continue

    if not disasm:
        result = _reflection_fallback(stage, refl)
        result["available_targets"] = list(targets)
        return result

    all_lines = disasm.splitlines()
    total_lines = len(all_lines)

    if line_range is not None and len(line_range) == 2:
        start = max(1, line_range[0]) - 1
        end = min(total_lines, line_range[1])
        filtered_lines = all_lines[start:end]
        disasm_out = "\n".join(filtered_lines)
        note = f"Showing lines {start + 1}-{end} of {total_lines}"
    elif search is not None:
        kw = search.lower()
        context = 5
        include = set()  # of int; a bare set() keeps Python 3.6 compatibility
        for i, line in enumerate(all_lines):
            if kw in line.lower():
                for j in range(max(0, i - context), min(total_lines, i + context + 1)):
                    include.add(j)
        if not include:
            note = f"No matches for '{search}' in {total_lines} lines"
            disasm_out = ""
        else:
            result_lines = []
            prev = -2
            for i in sorted(include):
                if i > prev + 1:
                    result_lines.append("...")
                result_lines.append(f"{i + 1:4d}: {all_lines[i]}")
                prev = i
            disasm_out = "\n".join(result_lines)
            note = f"Found '{search}' in {total_lines} total lines"
    else:
        disasm_out = disasm
        note = None

    result: dict = {
        "stage": stage,
        "target": used_target,
        "available_targets": list(targets),
        "source_type": "disasm",
        "total_lines": total_lines,
        "disassembly": disasm_out,
    }
    if note:
        result["note"] = note
    return result


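The keyword search above collects each matching line plus five lines of context and joins non-adjacent runs with `...` separators. A standalone sketch of that filtering step (the helper name `search_with_context` is illustrative, not part of the module):

```python
def search_with_context(lines, keyword, context=5):
    """Number lines matching keyword (case-insensitive), keep `context`
    lines around each match, and insert '...' between non-adjacent runs."""
    kw = keyword.lower()
    include = set()
    for i, line in enumerate(lines):
        if kw in line.lower():
            for j in range(max(0, i - context), min(len(lines), i + context + 1)):
                include.add(j)
    out, prev = [], -1
    for i in sorted(include):
        if i > prev + 1:
            out.append("...")
        out.append("{:4d}: {}".format(i + 1, lines[i]))
        prev = i
    return "\n".join(out)
```

This keeps disassembly output small enough to skim even for multi-thousand-line shaders.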
def get_shader_reflection(
    stage: str,
    event_id: Optional[int] = None,
) -> dict:
    """Get reflection information for the shader at the specified stage.

    Returns input/output signatures, constant buffer layouts, and resource bindings.

    Args:
        stage: Shader stage (vertex, hull, domain, geometry, pixel, compute).
        event_id: Optional event ID to navigate to first.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    stage_enum = SHADER_STAGE_MAP.get(stage.lower())
    if stage_enum is None:
        return make_error(f"Unknown shader stage: {stage}", "API_ERROR")

    state = session.controller.GetPipelineState()
    refl = state.GetShaderReflection(stage_enum)
    if refl is None:
        return make_error(f"No shader bound at stage '{stage}'", "API_ERROR")

    result: dict = {
        "stage": stage,
        "resource_id": str(refl.resourceId),
        "entry_point": refl.entryPoint,
    }

    result["input_signature"] = [serialize_sig_element(s) for s in refl.inputSignature]
    result["output_signature"] = [
        serialize_sig_element(s) for s in refl.outputSignature
    ]

    cbs = []
    for cb in refl.constantBlocks:
        cbs.append(
            {
                "name": cb.name,
                "byte_size": cb.byteSize,
                "bind_point": cb.fixedBindNumber,
                "variables_count": len(cb.variables),
            }
        )
    result["constant_blocks"] = cbs

    ros = []
    for ro in refl.readOnlyResources:
        ros.append(
            {
                "name": ro.name,
                "type": str(ro.textureType),
                "bind_point": ro.fixedBindNumber,
            }
        )
    result["read_only_resources"] = ros

    rws = []
    for rw in refl.readWriteResources:
        rws.append(
            {
                "name": rw.name,
                "type": str(rw.textureType),
                "bind_point": rw.fixedBindNumber,
            }
        )
    result["read_write_resources"] = rws

    samplers = []
    for s in refl.samplers:
        samplers.append(
            {
                "name": s.name,
                "bind_point": s.fixedBindNumber,
            }
        )
    result["samplers"] = samplers

    return result


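The reflection dict returned above has a fixed shape, so callers can post-process it without RenderDoc loaded. A hypothetical helper (`summarize_reflection` is not part of this module) that counts bindings per category:

```python
def summarize_reflection(refl):
    """Count entries in each binding list of a get_shader_reflection() result."""
    keys = (
        "constant_blocks",
        "read_only_resources",
        "read_write_resources",
        "samplers",
    )
    return {key: len(refl.get(key, [])) for key in keys}
```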
def get_cbuffer_contents(
    stage: str,
    cbuffer_index: int,
    event_id: Optional[int] = None,
    filter: Optional[str] = None,
) -> dict:
    """Get the actual values of a constant buffer at the specified shader stage.

    Returns a tree of variable names and their current values.

    Args:
        stage: Shader stage (vertex, hull, domain, geometry, pixel, compute).
        cbuffer_index: Index of the constant buffer (from get_shader_reflection).
        event_id: Optional event ID to navigate to first.
        filter: Case-insensitive substring to match variable names
            (e.g. "ibl", "exposure", "taa", "reflection"). Only matching
            variables are returned.
    """
    session = get_session()
    err = session.require_open()
    if err:
        return err
    err = session.ensure_event(event_id)
    if err:
        return err

    stage_enum = SHADER_STAGE_MAP.get(stage.lower())
    if stage_enum is None:
        return make_error(f"Unknown shader stage: {stage}", "API_ERROR")

    state = session.controller.GetPipelineState()
    refl = state.GetShaderReflection(stage_enum)
    if refl is None:
        return make_error(f"No shader bound at stage '{stage}'", "API_ERROR")

    num_cbs = len(refl.constantBlocks)
    if num_cbs == 0:
        return make_error(
            f"Shader at stage '{stage}' has no constant buffers",
            "API_ERROR",
        )
    if cbuffer_index < 0 or cbuffer_index >= num_cbs:
        return make_error(
            f"cbuffer_index {cbuffer_index} out of range (0-{num_cbs - 1})",
            "API_ERROR",
        )

    if stage_enum == rd.ShaderStage.Compute:
        pipe = state.GetComputePipelineObject()
    else:
        pipe = state.GetGraphicsPipelineObject()
    entry = state.GetShaderEntryPoint(stage_enum)
    cb_bind = state.GetConstantBlock(stage_enum, cbuffer_index, 0)

    cbuffer_vars = session.controller.GetCBufferVariableContents(
        pipe,
        refl.resourceId,
        stage_enum,
        entry,
        cbuffer_index,
        cb_bind.descriptor.resource,
        0,
        0,
    )

    variables = [serialize_shader_variable(v) for v in cbuffer_vars]

    if filter:
        kw = filter.lower()

        def _var_matches(var: dict) -> bool:
            if kw in var.get("name", "").lower():
                return True
            for m in var.get("members", []):
                if _var_matches(m):
                    return True
            return False

        variables = [v for v in variables if _var_matches(v)]

    return {
        "stage": stage,
        "cbuffer_index": cbuffer_index,
        "cbuffer_name": refl.constantBlocks[cbuffer_index].name,
        "filter": filter,
        "variables": variables,
        "variable_count": len(variables),
    }
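The nested `_var_matches` closure above keeps a top-level variable when its own name or any descendant member's name contains the filter substring. The same recursion as a standalone function:

```python
def var_matches(var, keyword):
    """True if the variable dict or any nested member name contains keyword
    (case-insensitive), mirroring the filter in get_cbuffer_contents."""
    kw = keyword.lower()
    if kw in var.get("name", "").lower():
        return True
    return any(var_matches(m, keyword) for m in var.get("members", []))
```

Because matching is recursive, filtering for "ibl" keeps a whole `Lighting` struct if any nested member mentions IBL.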
532
engine/tools/renderdoc_parser/util.py
Normal file
@@ -0,0 +1,532 @@
"""Utility functions: renderdoc module loading, serialization helpers, enum mappings."""
|
||||
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
from typing import Any
|
||||
|
||||
|
||||
def load_renderdoc():
|
||||
"""Load the renderdoc Python module from third_party directory."""
|
||||
if "renderdoc" in sys.modules:
|
||||
return sys.modules["renderdoc"]
|
||||
|
||||
util_dir = os.path.dirname(os.path.abspath(__file__))
|
||||
third_party_dir = os.path.normpath(
|
||||
os.path.join(util_dir, "..", "..", "third_party", "renderdoc")
|
||||
)
|
||||
pyd_path = os.path.join(third_party_dir, "renderdoc.pyd")
|
||||
|
||||
if sys.platform == "win32" and hasattr(os, "add_dll_directory"):
|
||||
os.add_dll_directory(third_party_dir)
|
||||
|
||||
sys.path.insert(0, third_party_dir)
|
||||
import renderdoc # noqa: E402
|
||||
|
||||
return renderdoc
|
||||
|
||||
|
||||
rd = load_renderdoc()
|
||||
|
||||
# ── Shader stage mapping ──

SHADER_STAGE_MAP = {
    "vertex": rd.ShaderStage.Vertex,
    "hull": rd.ShaderStage.Hull,
    "domain": rd.ShaderStage.Domain,
    "geometry": rd.ShaderStage.Geometry,
    "pixel": rd.ShaderStage.Pixel,
    "compute": rd.ShaderStage.Compute,
}

# ── FileType mapping ──

FILE_TYPE_MAP = {
    "png": rd.FileType.PNG,
    "jpg": rd.FileType.JPG,
    "bmp": rd.FileType.BMP,
    "tga": rd.FileType.TGA,
    "hdr": rd.FileType.HDR,
    "exr": rd.FileType.EXR,
    "dds": rd.FileType.DDS,
}

# ── MeshDataStage mapping ──

MESH_DATA_STAGE_MAP = {
    "vsin": rd.MeshDataStage.VSIn,
    "vsout": rd.MeshDataStage.VSOut,
    "gsout": rd.MeshDataStage.GSOut,
}

# ── Enum readable-name mappings ──

BLEND_FACTOR_MAP = {
    0: "Zero",
    1: "One",
    2: "SrcColor",
    3: "InvSrcColor",
    4: "DstColor",
    5: "InvDstColor",
    6: "SrcAlpha",
    7: "InvSrcAlpha",
    8: "DstAlpha",
    9: "InvDstAlpha",
    10: "SrcAlphaSat",
    11: "BlendFactor",
    12: "InvBlendFactor",
    13: "Src1Color",
    14: "InvSrc1Color",
    15: "Src1Alpha",
    16: "InvSrc1Alpha",
}

BLEND_OP_MAP = {
    0: "Add",
    1: "Subtract",
    2: "RevSubtract",
    3: "Min",
    4: "Max",
}

COMPARE_FUNC_MAP = {
    0: "AlwaysFalse",
    1: "Never",
    2: "Less",
    3: "LessEqual",
    4: "Greater",
    5: "GreaterEqual",
    6: "Equal",
    7: "NotEqual",
    8: "Always",
}

STENCIL_OP_MAP = {
    0: "Keep",
    1: "Zero",
    2: "Replace",
    3: "IncrSat",
    4: "DecrSat",
    5: "Invert",
    6: "IncrWrap",
    7: "DecrWrap",
}

CULL_MODE_MAP = {0: "None", 1: "Front", 2: "Back", 3: "FrontAndBack"}

FILL_MODE_MAP = {0: "Solid", 1: "Wireframe", 2: "Point"}

TOPOLOGY_MAP = {
    0: "Unknown",
    1: "PointList",
    2: "LineList",
    3: "LineStrip",
    4: "TriangleList",
    5: "TriangleStrip",
    6: "TriangleFan",
    7: "LineList_Adj",
    8: "LineStrip_Adj",
    9: "TriangleList_Adj",
    10: "TriangleStrip_Adj",
    11: "PatchList",
}

VAR_TYPE_MAP = {
    0: "Float",
    1: "Double",
    2: "Half",
    3: "SInt",
    4: "UInt",
    5: "SShort",
    6: "UShort",
    7: "SLong",
    8: "ULong",
    9: "SByte",
    10: "UByte",
    11: "Bool",
    12: "Enum",
    13: "GPUPointer",
    14: "ConstantBlock",
    15: "Struct",
    16: "Unknown",
}

SYSTEM_VALUE_MAP = {
    0: "None",
    1: "Position",
    2: "ClipDistance",
    3: "CullDistance",
    4: "RTIndex",
    5: "ViewportIndex",
    6: "VertexIndex",
    7: "PrimitiveIndex",
    8: "InstanceIndex",
    9: "DispatchThreadIndex",
    10: "GroupIndex",
    11: "GroupFlatIndex",
    12: "GroupThreadIndex",
    13: "GSInstanceIndex",
    14: "OutputControlPointIndex",
    15: "DomainLocation",
    16: "IsFrontFace",
    17: "MSAACoverage",
    18: "MSAASamplePosition",
    19: "MSAASampleIndex",
    20: "PatchNumVertices",
    21: "OuterTessFactor",
    22: "InsideTessFactor",
    23: "ColourOutput",
    24: "DepthOutput",
    25: "DepthOutputGreaterEqual",
    26: "DepthOutputLessEqual",
}

TEXTURE_DIM_MAP = {
    0: "Unknown",
    1: "Buffer",
    2: "Texture1D",
    3: "Texture1DArray",
    4: "Texture2D",
    5: "Texture2DArray",
    6: "Texture2DMS",
    7: "Texture2DMSArray",
    8: "Texture3D",
    9: "TextureCube",
    10: "TextureCubeArray",
}

RESOURCE_USAGE_MAP = {
    0: "None",
    1: "VertexBuffer",
    2: "IndexBuffer",
    3: "VS_Constants",
    4: "HS_Constants",
    5: "DS_Constants",
    6: "GS_Constants",
    7: "PS_Constants",
    8: "CS_Constants",
    9: "All_Constants",
    10: "StreamOut",
    11: "IndirectArg",
    16: "VS_Resource",
    17: "HS_Resource",
    18: "DS_Resource",
    19: "GS_Resource",
    20: "PS_Resource",
    21: "CS_Resource",
    22: "All_Resource",
    32: "VS_RWResource",
    33: "HS_RWResource",
    34: "DS_RWResource",
    35: "GS_RWResource",
    36: "PS_RWResource",
    37: "CS_RWResource",
    38: "All_RWResource",
    48: "InputTarget",
    49: "ColorTarget",
    50: "DepthStencilTarget",
    64: "Clear",
    65: "GenMips",
    66: "Resolve",
    67: "ResolveSrc",
    68: "ResolveDst",
    69: "Copy",
    70: "CopySrc",
    71: "CopyDst",
    72: "Barrier",
}


def enum_str(value, mapping: dict, fallback_prefix: str = "") -> str:
    """Convert an enum value to a readable string, falling back to str()."""
    try:
        int_val = int(value)
    except (TypeError, ValueError):
        return str(value)
    return mapping.get(int_val, f"{fallback_prefix}{value}")


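Paired with the value maps above, unknown enum values fall back to a prefixed string instead of raising, which keeps serialization robust across RenderDoc versions. A self-contained copy for illustration:

```python
BLEND_OP_MAP = {0: "Add", 1: "Subtract", 2: "RevSubtract", 3: "Min", 4: "Max"}


def enum_str(value, mapping, fallback_prefix=""):
    """Map an enum value to its readable name, with a prefixed fallback."""
    try:
        int_val = int(value)
    except (TypeError, ValueError):
        return str(value)
    return mapping.get(int_val, "{}{}".format(fallback_prefix, value))
```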
def blend_formula(
    color_src: str,
    color_dst: str,
    color_op: str,
    alpha_src: str,
    alpha_dst: str,
    alpha_op: str,
) -> str:
    """Generate a human-readable blend formula string."""

    def _op_str(op: str, a: str, b: str) -> str:
        if op == "Add":
            return f"{a} + {b}"
        elif op == "Subtract":
            return f"{a} - {b}"
        elif op == "RevSubtract":
            return f"{b} - {a}"
        elif op == "Min":
            return f"min({a}, {b})"
        elif op == "Max":
            return f"max({a}, {b})"
        return f"{a} {op} {b}"

    def _factor(f: str, channel: str) -> str:
        factor_map = {
            "Zero": "0",
            "One": "1",
            "SrcColor": "src.rgb",
            "InvSrcColor": "(1-src.rgb)",
            "DstColor": "dst.rgb",
            "InvDstColor": "(1-dst.rgb)",
            "SrcAlpha": "src.a",
            "InvSrcAlpha": "(1-src.a)",
            "DstAlpha": "dst.a",
            "InvDstAlpha": "(1-dst.a)",
            "SrcAlphaSat": "sat(src.a)",
            "BlendFactor": "factor",
            "InvBlendFactor": "(1-factor)",
            "Src1Color": "src1.rgb",
            "InvSrc1Color": "(1-src1.rgb)",
            "Src1Alpha": "src1.a",
            "InvSrc1Alpha": "(1-src1.a)",
        }
        return factor_map.get(f, f)

    c_src = _factor(color_src, "rgb")
    c_dst = _factor(color_dst, "rgb")
    color_expr = _op_str(color_op, f"{c_src}*src.rgb", f"{c_dst}*dst.rgb")

    a_src = _factor(alpha_src, "a")
    a_dst = _factor(alpha_dst, "a")
    alpha_expr = _op_str(alpha_op, f"{a_src}*src.a", f"{a_dst}*dst.a")

    return f"color: {color_expr} | alpha: {alpha_expr}"


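For the standard alpha-blend state (SrcAlpha / InvSrcAlpha / Add) the function above yields the textbook lerp between source and destination color. A trimmed sketch of the color half, with only three factors mapped for brevity:

```python
def blend_formula_color(src_factor, dst_factor, op):
    """Render the color half of a blend equation as a readable string."""
    factor_map = {"One": "1", "SrcAlpha": "src.a", "InvSrcAlpha": "(1-src.a)"}
    a = "{}*src.rgb".format(factor_map.get(src_factor, src_factor))
    b = "{}*dst.rgb".format(factor_map.get(dst_factor, dst_factor))
    if op == "Add":
        return "{} + {}".format(a, b)
    if op == "RevSubtract":
        # reverse-subtract swaps the operands: dst - src
        return "{} - {}".format(b, a)
    return "{} {} {}".format(a, op, b)
```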
# ── Error helpers ──


def make_error(message: str, code: str = "API_ERROR") -> dict:
    """Return a standardized error dict."""
    return {"error": message, "code": code}


# ── ActionFlags helpers ──

_ACTION_FLAG_NAMES = None


def _build_flag_names():
    """Build a mapping of single-bit flag values to their names."""
    names = {}
    for attr in dir(rd.ActionFlags):
        if attr.startswith("_"):
            continue
        val = getattr(rd.ActionFlags, attr)
        if isinstance(val, int) and val != 0 and (val & (val - 1)) == 0:
            names[val] = attr
    return names


def flags_to_list(flags: int):
    """Convert an ActionFlags bitmask to a list of flag name strings."""
    global _ACTION_FLAG_NAMES
    if _ACTION_FLAG_NAMES is None:
        _ACTION_FLAG_NAMES = _build_flag_names()
    result = []
    for bit, name in _ACTION_FLAG_NAMES.items():
        if flags & bit:
            result.append(name)
    return result


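`(val & (val - 1)) == 0` is the standard power-of-two test, so only single-bit flags land in the name table; decoding a mask then reduces to a per-bit AND. A sketch with made-up flag values (the real names come from `rd.ActionFlags`):

```python
FLAG_NAMES = {0x1: "Clear", 0x2: "Drawcall", 0x4: "Dispatch", 0x8: "CmdList"}


def decode_flags(mask):
    """List the names of all single-bit flags set in mask."""
    return [name for bit, name in sorted(FLAG_NAMES.items()) if mask & bit]
```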
# ── Serialization helpers ──


def serialize_action(
    action, structured_file, depth: int = 0, max_depth: int = 2
) -> dict:
    """Serialize an ActionDescription to a dict."""
    result = {
        "event_id": action.eventId,
        "name": action.GetName(structured_file),
        "flags": flags_to_list(action.flags),
        "num_indices": action.numIndices,
        "num_instances": action.numInstances,
    }
    outputs = []
    for o in action.outputs:
        rid = int(o)
        if rid != 0:
            outputs.append(str(o))
    if outputs:
        result["outputs"] = outputs
    depth_id = int(action.depthOut)
    if depth_id != 0:
        result["depth_output"] = str(action.depthOut)

    if depth < max_depth and len(action.children) > 0:
        result["children"] = [
            serialize_action(c, structured_file, depth + 1, max_depth)
            for c in action.children
        ]
    elif len(action.children) > 0:
        result["children_count"] = len(action.children)

    return result


def serialize_action_detail(action, structured_file) -> dict:
    """Serialize a single ActionDescription with full detail (children are counted, not expanded)."""
    result = {
        "event_id": action.eventId,
        "name": action.GetName(structured_file),
        "flags": flags_to_list(action.flags),
        "num_indices": action.numIndices,
        "num_instances": action.numInstances,
        "index_offset": action.indexOffset,
        "base_vertex": action.baseVertex,
        "vertex_offset": action.vertexOffset,
        "instance_offset": action.instanceOffset,
        "draw_index": action.drawIndex,
    }
    outputs = []
    for o in action.outputs:
        rid = int(o)
        if rid != 0:
            outputs.append(str(o))
    result["outputs"] = outputs

    depth_id = int(action.depthOut)
    result["depth_output"] = str(action.depthOut) if depth_id != 0 else None

    if action.parent:
        result["parent_event_id"] = action.parent.eventId
    if action.previous:
        result["previous_event_id"] = action.previous.eventId
    if action.next:
        result["next_event_id"] = action.next.eventId
    result["children_count"] = len(action.children)

    return result


def serialize_texture_desc(tex) -> dict:
    """Serialize a TextureDescription to a dict."""
    return {
        "resource_id": str(tex.resourceId),
        "name": tex.name if hasattr(tex, "name") else "",
        "width": tex.width,
        "height": tex.height,
        "depth": tex.depth,
        "array_size": tex.arraysize,
        "mips": tex.mips,
        "format": str(tex.format.Name()),
        "dimension": enum_str(tex.dimension, TEXTURE_DIM_MAP, "Dim."),
        "msqual": tex.msQual,
        "mssamp": tex.msSamp,
        "creation_flags": tex.creationFlags,
    }


def serialize_buffer_desc(buf) -> dict:
    """Serialize a BufferDescription to a dict."""
    return {
        "resource_id": str(buf.resourceId),
        "name": buf.name if hasattr(buf, "name") else "",
        "length": buf.length,
        "creation_flags": buf.creationFlags,
    }


def serialize_resource_desc(res) -> dict:
    """Serialize a ResourceDescription to a dict."""
    try:
        name = res.name if hasattr(res, "name") else str(res.resourceId)
    except Exception:
        name = str(res.resourceId)
    return {
        "resource_id": str(res.resourceId),
        "name": name,
        "type": str(res.type),
    }


def _get_var_type_accessor(var):
    """Determine the correct value accessor and type name for a ShaderVariable.

    Returns (accessor_attr, type_name) based on var.type.
    """
    try:
        var_type = int(var.type)
    except Exception:
        return "f32v", "float"

    # RenderDoc VarType enum values (see VAR_TYPE_MAP above):
    # Float=0, Double=1, Half=2, SInt=3, UInt=4, SShort=5, UShort=6,
    # SLong=7, ULong=8, SByte=9, UByte=10, Bool=11, Enum=12,
    # GPUPointer=13, ConstantBlock=14, Struct=15, Unknown=16
    if var_type == 1:  # Double
        return "f64v", "double"
    elif var_type in (3, 5, 7, 9):  # SInt, SShort, SLong, SByte
        return "s32v", "int"
    elif var_type in (4, 6, 8, 10):  # UInt, UShort, ULong, UByte
        return "u32v", "uint"
    elif var_type == 11:  # Bool
        return "u32v", "bool"
    else:  # Float, Half, and all others default to float
        return "f32v", "float"


def serialize_shader_variable(var, max_depth: int = 10, depth: int = 0) -> dict:
    """Recursively serialize a ShaderVariable to a dict."""
    result = {"name": var.name}
    if len(var.members) == 0:
        # Leaf variable: extract values using the correct type accessor
        accessor, type_name = _get_var_type_accessor(var)
        result["type"] = type_name
        values = []
        for r in range(var.rows):
            row_vals = []
            for c in range(var.columns):
                row_vals.append(getattr(var.value, accessor)[r * var.columns + c])
            values.append(row_vals)
        # Flatten single-row results
        if len(values) == 1:
            result["value"] = values[0]
        else:
            result["value"] = values
        result["rows"] = var.rows
        result["columns"] = var.columns
    elif depth < max_depth:
        result["members"] = [
            serialize_shader_variable(m, max_depth, depth + 1) for m in var.members
        ]
    return result


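The leaf branch above reads a flat row-major value array, where element (r, c) lives at index `r * columns + c`. The reshaping step in isolation:

```python
def unflatten_row_major(flat, rows, columns):
    """Reshape a row-major flat list into a rows x columns nested list."""
    return [
        [flat[r * columns + c] for c in range(columns)]
        for r in range(rows)
    ]
```

For a float4x4 this turns RenderDoc's 16-element value array into four rows of four.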
def serialize_usage_entry(usage) -> dict:
    """Serialize a single EventUsage entry."""
    return {
        "event_id": usage.eventId,
        "usage": enum_str(usage.usage, RESOURCE_USAGE_MAP, "Usage."),
    }


def serialize_sig_element(sig) -> dict:
    """Serialize a SigParameter (shader signature element)."""
    return {
        "var_name": sig.varName,
        "semantic_name": sig.semanticName,
        "semantic_index": sig.semanticIndex,
        "semantic_idx_name": sig.semanticIdxName,
        "var_type": enum_str(sig.varType, VAR_TYPE_MAP, "VarType."),
        "comp_count": sig.compCount,
        "system_value": enum_str(sig.systemValue, SYSTEM_VALUE_MAP, "SysValue."),
        "reg_index": sig.regIndex,
    }


def to_json(obj: Any) -> str:
    """Serialize an object to a compact JSON string."""
    return json.dumps(obj, separators=(",", ":"), ensure_ascii=False)
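The compact separators strip the spaces `json.dumps` inserts by default, and `ensure_ascii=False` keeps non-ASCII characters literal rather than `\u`-escaped:

```python
import json


def to_json(obj):
    """Serialize to compact JSON (no separator spaces, literal non-ASCII)."""
    return json.dumps(obj, separators=(",", ":"), ensure_ascii=False)
```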