10 changes: 6 additions & 4 deletions Dockerfile
@@ -1,8 +1,8 @@
# 1) Base image with CUDA + cuDNN for GPU support
FROM nvidia/cuda:12.1.0-cudnn8-runtime-ubuntu22.04
# 1) Base image with CUDA + cuDNN *devel* so nvcc is available to compile gsplat CUDA extensions
FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu22.04

# 2) Basic system tools and Python
RUN apt-get update && apt-get install -y python3 python3-pip git wget && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y python3 python3-pip git wget libgl1 ninja-build && rm -rf /var/lib/apt/lists/*

RUN python3 -m pip install --upgrade pip

@@ -12,7 +12,9 @@ RUN pip install torch torchvision torchaudio --index-url https://download.pytorc
# 4) Nerfstudio, auto_LiRPA, and other Python deps used by this repo
RUN pip install "nerfstudio[full]"

RUN pip install numpy scipy pillow matplotlib tqdm pyyaml torchvision opencv-python graphviz
RUN pip install numpy scipy pillow matplotlib tqdm pyyaml torchvision graphviz && \
    pip uninstall -y opencv-python opencv-python-headless || true && \
    pip install opencv-python-headless==4.10.0.84

# 5) Copy the Abstract-Rendering repo into the image
WORKDIR /workspace
22 changes: 6 additions & 16 deletions DownStreamModel/gatenet/gatenet.py
@@ -21,36 +21,26 @@ def __init__(self, config):
self.conv5 = nn.Conv2d(16, 16, kernel_size=3, padding=1, bias=True)
self.bn5 = nn.BatchNorm2d(16, momentum=config['batch_norm_decay'], eps=config['batch_norm_epsilon'])

self.conv6 = nn.Conv2d(16, 16, kernel_size=3, padding=1, bias=True)
self.bn6 = nn.BatchNorm2d(16, momentum=config['batch_norm_decay'], eps=config['batch_norm_epsilon'])

self.flatten = nn.Flatten()

# print(config['input_shape'])
res = self.conv(torch.zeros(config['input_shape'])[None])
# print(res.shape[1])
self.fc = nn.Linear(res.shape[1], int(torch.prod(torch.tensor(config['output_shape']))))

def conv(self, x):
x = F.relu(self.bn1(self.conv1(x))) # 64
x = F.avg_pool2d(x, kernel_size=2)
x = F.relu(self.bn1(self.conv1(x))) # 32
x = F.avg_pool2d(x, kernel_size=2) # → 16

x = F.relu(self.bn2(self.conv2(x))) # 32
x = F.avg_pool2d(x, kernel_size=2)
x = F.relu(self.bn2(self.conv2(x))) # 16
x = F.avg_pool2d(x, kernel_size=2) # → 8

x = F.relu(self.bn3(self.conv3(x))) # 16
x = F.avg_pool2d(x, kernel_size=2)

x = F.relu(self.bn4(self.conv4(x))) # 8
x = F.avg_pool2d(x, kernel_size=2)

x = F.relu(self.bn5(self.conv5(x))) # 4
x = F.avg_pool2d(x, kernel_size=2)


x = F.relu(self.bn6(self.conv6(x))) # No pooling after conv6 # 2

x = self.flatten(x)
x = F.relu(self.bn5(self.conv5(x))) # 8
x = self.flatten(x) # [batch, 16*2*2] = [batch, 64]

return x

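The dummy-forward trick used in `__init__` above (sizing `self.fc` from a zero tensor instead of hand-computing the flattened width) generalizes to any conv stack. A minimal self-contained sketch of the pattern; the two-stage network here is illustrative, not the repo's GateNet:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyConvNet(nn.Module):
    """Hypothetical two-stage analogue of GateNet's conv stack."""
    def __init__(self, input_shape=(3, 32, 32), output_dim=4):
        super().__init__()
        self.conv1 = nn.Conv2d(input_shape[0], 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=3, padding=1)
        self.flatten = nn.Flatten()
        # Dummy forward pass infers the flattened size, as in gatenet.py.
        with torch.no_grad():
            n_feat = self.conv(torch.zeros(input_shape)[None]).shape[1]
        self.fc = nn.Linear(n_feat, output_dim)

    def conv(self, x):
        x = F.avg_pool2d(F.relu(self.conv1(x)), 2)  # 32 -> 16
        x = F.avg_pool2d(F.relu(self.conv2(x)), 2)  # 16 -> 8
        return self.flatten(x)                      # 16 * 8 * 8 = 1024

    def forward(self, x):
        return self.fc(self.conv(x))

net = TinyConvNet()
out = net(torch.zeros(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 4])
```

Because the linear layer's input width is derived from the actual conv output, changing a pooling stage (as this diff does) cannot silently break the `fc` dimensions.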
172 changes: 141 additions & 31 deletions README.md
@@ -35,8 +35,6 @@ Download the repository from GitHub and remove the bundled `auto_LiRPA` folder (
```bash
cd ~
git clone --branch master https://github.com/IllinoisReliableAutonomyGroup/Abstract-Rendering.git
cd ~/Abstract-Rendering
rm -rf auto_LiRPA
```

### 2. Install auto_LiRPA
Expand All @@ -45,33 +43,57 @@ Install the neural network verification library *auto_LiRPA*, and symbolic link
cd ~
git clone --branch master https://github.com/Verified-Intelligence/auto_LiRPA.git
cd ~/Abstract-Rendering
rm -rf auto_LiRPA
ln -s ~/auto_LiRPA/auto_LiRPA auto_LiRPA
```

### 3. Download Scene Data
You may either use your existing Nerfstudio data or download the pre-reconstructed [Nerfstudio scenes](https://drive.google.com/drive/folders/1koY1TL30Bty2x0U6VpszKRgMXk61oTkG?usp=drive_link) and place them in the below dictionary structure.
You may either use your existing Nerfstudio data or download the pre-reconstructed [Nerfstudio scenes](https://drive.google.com/drive/folders/1koY1TL30Bty2x0U6VpszKRgMXk61oTkG?usp=drive_link). First create the output directory:

```bash
~/Abstract-Rendering/nerfstudio/outputs/${case_name}/${reconstruction_method}/${datatime}/...
cd ~/Abstract-Rendering
mkdir -p nerfstudio/outputs
```

Below is visualization of scene *circle*.
![](figures/scene_circle.png)
After downloading, unzip the scene archive from your Downloads folder and move it into place. Set `case_name` to match the scene you downloaded (e.g. `train_data_new`):

```bash
export case_name=train_data_new

cd ~/Downloads
unzip ${case_name}_*.zip

mv ${case_name} ~/Abstract-Rendering/nerfstudio/outputs/
```

The final directory structure should look like:

```
nerfstudio/outputs/
└── ${case_name}/
└── ${reconstruction_method}/
└── ${data_time}/
├── config.yml
├── dataparser_transforms.json
└── nerfstudio_models/
└── step-000XXXXXX.ckpt
```

For example, the U-turn scene used in this repository sits at:

```
nerfstudio/outputs/train_data_new/splatfacto/2025-05-09_151825/
```
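If you want to sanity-check the layout programmatically before running anything, a small sketch (the helper name and defaults are illustrative, not part of the repo):

```python
from pathlib import Path

def find_checkpoint(root="nerfstudio/outputs", case_name="train_data_new",
                    method="splatfacto"):
    """Return the newest step-*.ckpt under the expected layout, or None."""
    base = Path(root) / case_name / method
    ckpts = sorted(base.glob("*/nerfstudio_models/step-*.ckpt"))
    return ckpts[-1] if ckpts else None

# Run from the repo root; returns None if the scene isn't downloaded yet.
ckpt = find_checkpoint()
```

Sorting the glob results means the highest-numbered (latest) checkpoint is returned when several training runs exist.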

Below is a visualization of scene *circle*.
![](figures/scene_circle.png)

### 4. Run via Docker

This repository also includes a Dockerfile that sets up a GPU-enabled environment with CUDA, PyTorch, Nerfstudio, and the other required Python dependencies pre-installed. Using Docker is optional but can make the environment more reproducible and easier to share with others.

**Important: Please complete all prior setup steps (1–3) before using Docker in this step.**

- **Prerequisites**: Complete Steps 1–3 above (clone this repo, install and link your local `auto_LiRPA`, and optionally download scene data), have Docker installed on your machine, and install the NVIDIA Container Toolkit if you want to use a GPU from inside the container.
- **Build the image**: From the root of this repository, build a Docker image using the provided Dockerfile, for example under the name `abstract-rendering:latest`:
```bash
Expand All @@ -82,28 +104,19 @@ This repository also includes a Dockerfile that sets up a GPU-enabled environmen
```bash
cd ~/Abstract-Rendering
docker run --gpus all -it --rm \
-p 8080:8080 \
-v "$HOME/Abstract-Rendering":/workspace/Abstract-Rendering \
-v "$HOME/auto_LiRPA":"$HOME/auto_LiRPA" \
-v "$HOME/.cache/docker-abstract":/root/.cache \
abstract-rendering:latest \
/bin/bash
```
The first `-v` makes your local Abstract-Rendering repository visible at `/workspace/Abstract-Rendering` inside the container. The second `-v` mounts your `~/auto_LiRPA` clone at the same absolute path inside the container so that the `auto_LiRPA` symlink in this repo continues to resolve and the code uses your local auto_LiRPA version. The third `-v` persists the CUDA kernel cache across container restarts; without it, gsplat recompiles its CUDA kernels every time you start a new container (2–3 minutes of overhead).
- **Inside the container**: Once the container starts, run
```bash
cd /workspace/Abstract-Rendering
```
and you can follow the commands in the *Examples* section below exactly as written to run the rendering, abstract rendering, and downstream verification scripts from inside the container.

## Examples

@@ -172,9 +185,111 @@ The visualization of Gatenet verification looks like:
where green indicates certified regions; red denotes potential
violations; blue indicates gates.

---
### Set-Valued Training

Train GateNet on **abstract (set-valued) images** — per-pixel lower/upper bound images produced by the abstract renderer — for certifiably correct pose estimation across entire pose cells.

#### 1. Run Abstract Gsplat Pose Estimation

Partitions the ODD into cuboid cells, runs abstract rendering, and saves per-cell relative pose bounds (lower/upper w.r.t. the reference point) used for training and certification.

```bash
cd ~/Abstract-Rendering
export case_name=train_data_new
python3 scripts/abstract_gsplat_pose_estimation.py --config configs/${case_name}/config.yaml --odd configs/${case_name}/traj.json
```

Output: `Outputs/AbstractImages/${case_name}/cuboid/`
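The cuboid partitioning step can be sketched as a uniform grid split of the ODD box; the `part` field in `config.yaml` (e.g. `[2,5,5]`) gives the number of cells per axis. The function name and box bounds below are illustrative, not the repo's code:

```python
import numpy as np

def partition_odd(lo, hi, part):
    """Split the box [lo, hi] into part[0]*part[1]*part[2] cuboid cells.

    Returns an array of (cell_lo, cell_hi) corner pairs, shape (n_cells, 2, 3).
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    # Per-axis grid lines: part[d] cells need part[d] + 1 edges.
    edges = [np.linspace(lo[d], hi[d], part[d] + 1) for d in range(3)]
    cells = []
    for i in range(part[0]):
        for j in range(part[1]):
            for k in range(part[2]):
                c_lo = [edges[0][i], edges[1][j], edges[2][k]]
                c_hi = [edges[0][i + 1], edges[1][j + 1], edges[2][k + 1]]
                cells.append((c_lo, c_hi))
    return np.array(cells)

cells = partition_odd([0, 0, 0], [2, 5, 5], [2, 5, 5])
print(cells.shape)  # (50, 2, 3)
```

Each cell then gets one abstract-rendering pass, and the per-cell output bounds are what the training and certification steps below consume.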

---

#### 2. Prepare the data

Download pre-computed abstract images from [Google Drive](https://drive.google.com/drive/u/2/folders/1jWmVoXZKHr2ds9ObGoWNHWzfFTKjlJs3) and place under:

```
~/Abstract-Rendering/Outputs/AbstractImages/${case_name}/cuboid/
```

For the Nerfstudio viewer, also download the U-turn dataset and place at:

```
~/Abstract-Rendering/data/uturn/
```

---

#### 3. Configure `train_certify_config.yml`

Set the following fields in `configs/${case_name}/train_certify_config.yml`:

| Parameter | Description |
|---|---|
| `abstract_folder` | Path to cuboid `.pt` abstract image files |
| `concrete_image_root` | Path to concrete rendered images |
| `checkpoint_dir` | Where trained GateNet weights are saved |
| `image_width` / `image_height` | Must match abstract images (default `32×32`) |
| `num_epochs` | Training epochs (default `65`) |
| `batch_size_concrete` / `batch_size_abstract` | Reduce if GPU OOM |
| `lambda_concrete` / `lambda_abstract` | Loss weights — must sum to 1.0 |
| `tolerance` | Allowed pose estimation error (default `0.25`) |
| `learning_rate` | Adam learning rate (default `0.0005`) |
| `weight_decay` | L2 regularisation (default `0.00001`) |
| `bound_method` | CROWN method — `"backward"` recommended |
| `save_every` | Save checkpoint every N epochs |
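A quick sketch of validating the invariants from the table before launching a run (field names are taken from the table; the helper itself is not part of the repo):

```python
import yaml

def check_train_config(path):
    """Load a train_certify_config.yml and check a few invariants."""
    with open(path) as f:
        cfg = yaml.safe_load(f)
    # Loss weights must sum to 1.0 per the table above.
    lam = cfg["lambda_concrete"] + cfg["lambda_abstract"]
    assert abs(lam - 1.0) < 1e-6, "lambda_concrete + lambda_abstract must sum to 1.0"
    assert cfg["tolerance"] > 0, "tolerance must be positive"
    return cfg
```

Catching a bad weight split here is cheaper than discovering it partway through 65 epochs of training.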

#### 4. Train

```bash
cd ~/Abstract-Rendering
export case_name=train_data_new
python3 scripts/gatenet_train_certify.py --config configs/${case_name}/train_certify_config.yml
```

Note the `run_datetime` printed at the start — needed for certification.

---

### Test GateNet on Abstract Images (CROWN Certification)

Set `run_datetime` and `tolerance` in `configs/${case_name}/train_certify_config.yml`, then run:

```bash
cd ~/Abstract-Rendering
export case_name=train_data_new
python3 scripts/test_gatenet_abstract.py \
--config configs/${case_name}/train_certify_config.yml \
--traj configs/${case_name}/traj.yaml
```
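CROWN computes much tighter bounds than naive interval arithmetic, but the certification idea can be sketched with intervals: propagate the abstract image's per-pixel `[lower, upper]` bounds through the network and check the resulting output bound against `tolerance`. For a single linear layer (illustrative only, not the repo's code):

```python
import numpy as np

def interval_linear(W, b, x_lb, x_ub):
    """Propagate elementwise bounds [x_lb, x_ub] through y = W @ x + b."""
    # Split W by sign so each output bound pairs with the right input bound.
    W_pos = np.clip(W, 0, None)
    W_neg = np.clip(W, None, 0)
    y_lb = W_pos @ x_lb + W_neg @ x_ub + b
    y_ub = W_pos @ x_ub + W_neg @ x_lb + b
    return y_lb, y_ub

# Toy check: y = x1 - x2 with x1, x2 in [0, 1] gives y in [-1, 1].
W = np.array([[1.0, -1.0]])
b = np.array([0.0])
lb, ub = interval_linear(W, b, np.zeros(2), np.ones(2))
print(lb, ub)  # [-1.] [1.]
```

The backward (CROWN) method configured above refines this by propagating linear relaxations through the ReLU layers instead of plain intervals, which is why `"backward"` is the recommended `bound_method`.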

---

### Visualize Certification in the Nerfstudio Viewer

```bash
cd ~/Abstract-Rendering
export case_name=train_data_new
python3 scripts/visualize_abstract_viser.py \
--config configs/${case_name}/train_certify_config.yml \
--option ns \
--data data/uturn
```

Open `http://localhost:8080` in your browser. **Green** = certified, **red** = violated.

![Viser Visualization](figures/vis_plane.png)


**Useful flags:**

| Flag | Effect |
|---|---|
| `--opacity 0.2` | Make cuboids more transparent (default `0.35`) |
| `--no-cuboids` | Show the scene only, skip CROWN and cuboid overlay |
| `--port 8081` | Change the viewer port if 8080 is already in use |

## Scripts
`render_gsplat.py`:
- Concrete renderer: given a trained Nerfstudio 3D Gaussian scene and a list of poses, it produces standard RGB images along the trajectory.
- Reads `configs/${case_name}/config.yaml` for parameters set by the user and `configs/${case_name}/traj.json` for the pose information.
@@ -219,11 +334,6 @@ violations; blue indicates gates.
- Optional downstream configs such as `gatenet.yml` and `vis_absimg.yaml`.
- When creating a new case, you should create a new folder under `configs/` (for example `configs/my_case/`) and add a new `config.yaml` and trajectory files there, rather than modifying the existing case folders.

- Implements the volume‑rendering step for Gaussian splats.
- For each gaussian, combines opacity and color contributions for each pixel ray using a cumulative product, and extends the same logic to lower/upper bounds in the abstract setting.
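The cumulative-product compositing described above can be sketched for a single pixel ray (illustrative numpy; the repo's implementation is batched over tiles and extended to lower/upper bounds):

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing along one pixel ray.

    colors: (N, 3) per-Gaussian RGB; alphas: (N,) per-Gaussian opacity in [0, 1],
    sorted front to back. Transmittance T_i is the cumulative product of
    (1 - alpha_j) over all Gaussians j in front of i.
    """
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                      # contribution of each Gaussian
    return (weights[:, None] * colors).sum(axis=0)

# Two Gaussians: an opaque red in front fully hides the green behind it.
rgb = composite(np.array([[1.0, 0, 0], [0, 1.0, 0]]), np.array([1.0, 1.0]))
print(rgb)  # [1. 0. 0.]
```

In the abstract setting the same recurrence is evaluated twice, once with bounds chosen to minimize and once to maximize each pixel, yielding the per-pixel lower/upper images.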



## Citation

If you use this repository or the Abstract-Rendering toolkit in your work, please consider citing our NeurIPS 2025 spotlight poster:
32 changes: 32 additions & 0 deletions configs/boeing787_nerfstudio/config.yaml
@@ -0,0 +1,32 @@
bound_method: "forward"
render_method: "splatfacto"
case_name: "boeing787_nerfstudio"
odd_type: "cuboid"
save_filename: null
debug: false

width: 640
height: 640
fx: 705
fy: 705
eps2d: 15

downsampling_ratio: 10
tile_size_abstract: 8
tile_size_render: 24
min_distance: 0.01
max_distance: 100.0
gs_batch: 70
part: [2,5,5]

data_time: "2026-01-31_235019"
checkpoint_filename: "step-000299999_pruned_90.ckpt"

bg_img_path: null
bg_pure_color: [0.0117, 0.208, 0.988]

save_ref: true
save_bound: true
N_samples: 5


13 changes: 13 additions & 0 deletions configs/boeing787_nerfstudio/gatenet.yml
@@ -0,0 +1,13 @@
case_name: "boeing787_nerfstudio"
nn_type: "gatenet"
bound_method: "backward"
render_method: "splatfacto"
odd_type: "cuboid"
run_datetime: "20260223_221554"

width: 64
height: 64

debug: True
threshold: [850.0, 530.0, 570.0]
show_details: False