Introduce claude code and initialize codex

This commit is contained in:
SepComet 2026-04-08 17:00:24 +08:00
parent dce5598c70
commit 0a78c0bd94
14 changed files with 1455 additions and 105 deletions


@ -0,0 +1,152 @@
---
name: "OPSX: Apply"
description: Implement tasks from an OpenSpec change (Experimental)
category: Workflow
tags: [workflow, artifacts, experimental]
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name (e.g., `/opsx:apply add-auth`). If omitted, check whether it can be inferred from conversation context. If the request is vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
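The parsing step above can be sketched as follows. This is a minimal sketch, assuming `jq` is installed; the sample payload is illustrative and stands in for real `openspec status --change "<name>" --json` output.

```shell
# Illustrative payload mirroring the fields described above (not real CLI output)
status='{"schemaName":"spec-driven","artifacts":[{"id":"tasks","status":"done"}]}'
# Extract the workflow schema name
schema=$(echo "$status" | jq -r '.schemaName')
echo "Schema: $schema"
```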
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using `/opsx:continue`
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
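The state handling above can be sketched with a `case` statement. A hedged sketch, assuming `jq` is available; the sample JSON and its `message` field are illustrative stand-ins for the real apply-instructions output.

```shell
# Illustrative payload (not real CLI output)
instr='{"state":"blocked","message":"tasks artifact missing"}'
state=$(echo "$instr" | jq -r '.state')
case "$state" in
  blocked)  echo "Blocked: $(echo "$instr" | jq -r '.message'). Try /opsx:continue." ;;
  all_done) echo "All tasks complete. Consider archiving." ;;
  *)        echo "Proceeding to implementation." ;;
esac
```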
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark the task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
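The checkbox update in step 6 can be done with a single `sed` substitution. A hypothetical helper; the task text and file path are illustrative.

```shell
# Demo tasks file (illustrative content)
printf -- '- [x] Set up schema\n- [ ] Implement OAuth flow\n' > /tmp/tasks-demo.md
# Flip exactly one task line from unchecked to checked
sed -i 's/^- \[ \] Implement OAuth flow$/- [x] Implement OAuth flow/' /tmp/tasks-demo.md
grep -c '^- \[x\]' /tmp/tasks-demo.md   # prints 2
```

Anchoring the pattern to the full task line avoids accidentally checking a different task with a similar prefix.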
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! You can archive this change with `/opsx:archive`.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@ -0,0 +1,157 @@
---
name: "OPSX: Archive"
description: Archive a completed change in the experimental workflow
category: Workflow
tags: [workflow, archive, experimental]
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name after `/opsx:archive` (e.g., `/opsx:archive add-auth`). If omitted, check whether it can be inferred from conversation context. If the request is vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Prompt user for confirmation to continue
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Prompt user for confirmation to continue
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
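The counting step above can be sketched with `grep -c`. Note that `grep -c` exits non-zero when there are no matches, so `|| true` keeps the `0` count without aborting a `set -e` script. The file content is illustrative.

```shell
# Demo tasks file (illustrative content)
printf -- '- [x] Task 1\n- [x] Task 2\n- [ ] Task 3\n' > /tmp/tasks-check.md
# Count incomplete vs complete tasks
incomplete=$(grep -c '^- \[ \]' /tmp/tasks-check.md || true)
complete=$(grep -c '^- \[x\]' /tmp/tasks-check.md || true)
echo "$complete complete, $incomplete incomplete"
```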
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
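The per-capability comparison above can be sketched as a loop over the delta specs. A sketch only, using a temp layout in place of a real repo; `diff -q` merely reports whether a sync would change anything, not what the changes are.

```shell
# Temp layout standing in for a real openspec project (illustrative)
root=$(mktemp -d)
mkdir -p "$root/openspec/changes/add-auth/specs/auth" "$root/openspec/specs/auth"
echo "old requirement" > "$root/openspec/specs/auth/spec.md"
echo "new requirement" > "$root/openspec/changes/add-auth/specs/auth/spec.md"
# Compare each delta spec with its corresponding main spec
for delta in "$root"/openspec/changes/add-auth/specs/*/spec.md; do
  cap=$(basename "$(dirname "$delta")")
  if diff -q "$root/openspec/specs/$cap/spec.md" "$delta" >/dev/null; then
    echo "$cap: already in sync"
  else
    echo "$cap: changes to apply"
  fi
done
```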
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
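Steps 5's naming, existence check, and move can be sketched end to end. A sketch using a temp directory in place of the repo root; `date +%F` produces the `YYYY-MM-DD` prefix described above.

```shell
# Temp directory standing in for the repo root (illustrative)
root=$(mktemp -d); cd "$root"
name=add-auth
mkdir -p "openspec/changes/$name" openspec/changes/archive
# Date-stamped target name, e.g. 2026-04-08-add-auth
target="openspec/changes/archive/$(date +%F)-$name"
if [ -e "$target" ]; then
  echo "Archive failed: $target already exists"
else
  mv "openspec/changes/$name" "$target"
  echo "Archived to $target"
fi
```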
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Spec sync status (synced / sync skipped / no delta specs)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs
All artifacts complete. All tasks complete.
```
**Output On Success (No Delta Specs)**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** No delta specs
All artifacts complete. All tasks complete.
```
**Output On Success With Warnings**
```
## Archive Complete (with warnings)
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** Sync skipped (user chose to skip)
**Warnings:**
- Archived with 2 incomplete artifacts
- Archived with 3 incomplete tasks
- Delta spec sync was skipped (user chose to skip)
Review the archive if this was not intentional.
```
**Output On Error (Archive Exists)**
```
## Archive Failed
**Change:** <change-name>
**Target:** openspec/changes/archive/YYYY-MM-DD-<name>/
Target archive directory already exists.
**Options:**
1. Rename the existing archive
2. Delete the existing archive if it's a duplicate
3. Wait until a different date to archive
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use the Skill tool to invoke `openspec-sync-specs` (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@ -0,0 +1,173 @@
---
name: "OPSX: Explore"
description: "Enter explore mode - think through ideas, investigate problems, clarify requirements"
category: Workflow
tags: [workflow, explore, experimental, thinking]
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
**Input**: The argument after `/opsx:explore` is whatever the user wants to think about. Could be:
- A vague idea: "real-time collaboration"
- A specific problem: "the auth system is getting unwieldy"
- A change name: "add-dark-mode" (to explore in context of that change)
- A comparison: "postgres vs sqlite for this"
- Nothing (just enter explore mode)
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
If the user mentioned a specific change name, read its artifacts for context.
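A quick way to surface that context is to print one line per change. The exact JSON shape of `openspec list --json` is not specified here, so this sketch assumes an array of objects with `name`, `schemaName`, and `status` fields and uses an illustrative payload; adjust to the real output.

```shell
# Illustrative payload (assumed shape, not verified against the real CLI)
changes='[{"name":"add-auth","schemaName":"spec-driven","status":"in-progress"}]'
echo "$changes" | jq -r '.[] | "\(.name) (\(.schemaName), \(.status))"'
```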
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When things crystallize, you might offer a summary - but it's optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own


@ -0,0 +1,106 @@
---
name: "OPSX: Propose"
description: Propose a new change - create it and generate all artifacts in one step
category: Workflow
tags: [workflow, artifacts, experimental]
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run /opsx:apply
---
**Input**: The argument after `/opsx:propose` is the change name (kebab-case), OR a description of what the user wants to build.
**Steps**
1. **If no input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
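A mechanical slug can serve as a starting point for the kebab-case derivation above; shortening words (e.g., "authentication" → "auth") remains a judgment call, so this sketch only lowercases and hyphenates.

```shell
desc="add user authentication"
# Lowercase, replace runs of non-alphanumerics with "-", trim edge hyphens
name=$(echo "$desc" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//;s/-$//')
echo "$name"   # add-user-authentication
```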
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
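The apply-ready check in step 4b can be expressed as a single `jq` query. A sketch, assuming `jq`; the sample payload mirrors the fields described above, and `jq -e` turns the boolean result into an exit status. Note the query is vacuously true if an `applyRequires` id matches no artifact, so a real script may want an extra existence check.

```shell
# Illustrative payload (not real CLI output)
status='{"applyRequires":["tasks"],"artifacts":[{"id":"proposal","status":"done"},{"id":"tasks","status":"done"}]}'
# True only if every required artifact reports status "done"
if echo "$status" | jq -e '[.applyRequires[] as $id | (.artifacts[] | select(.id==$id) | .status=="done")] | all' >/dev/null; then
  echo "apply-ready"
fi
```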
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply` to start implementing."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next


@ -0,0 +1,156 @@
---
name: openspec-apply-change
description: Implement tasks from an OpenSpec change. Use when the user wants to start implementing, continue implementation, or work through tasks.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Implement tasks from an OpenSpec change.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the request is vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **Select the change**
If a name is provided, use it. Otherwise:
- Infer from conversation context if the user mentioned a change
- Auto-select if only one active change exists
- If ambiguous, run `openspec list --json` to get available changes and use the **AskUserQuestion tool** to let the user select
Always announce: "Using change: <name>" and how to override (e.g., `/opsx:apply <other>`).
2. **Check status to understand the schema**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to understand:
- `schemaName`: The workflow being used (e.g., "spec-driven")
- Which artifact contains the tasks (typically "tasks" for spec-driven, check status for others)
3. **Get apply instructions**
```bash
openspec instructions apply --change "<name>" --json
```
This returns:
- Context file paths (varies by schema - could be proposal/specs/design/tasks or spec/tests/implementation/docs)
- Progress (total, complete, remaining)
- Task list with status
- Dynamic instruction based on current state
**Handle states:**
- If `state: "blocked"` (missing artifacts): show message, suggest using openspec-continue-change
- If `state: "all_done"`: congratulate, suggest archive
- Otherwise: proceed to implementation
4. **Read context files**
Read the files listed in `contextFiles` from the apply instructions output.
The files depend on the schema being used:
- **spec-driven**: proposal, specs, design, tasks
- Other schemas: follow the contextFiles from CLI output
5. **Show current progress**
Display:
- Schema being used
- Progress: "N/M tasks complete"
- Remaining tasks overview
- Dynamic instruction from CLI
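The progress line above can be built directly from the apply-instructions JSON. A sketch, assuming `jq` and `progress.total`/`progress.complete` fields named as described; the sample payload is illustrative.

```shell
# Illustrative payload (not real CLI output)
instr='{"schemaName":"spec-driven","progress":{"total":7,"complete":4}}'
echo "$instr" | jq -r '"\(.progress.complete)/\(.progress.total) tasks complete"'
```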
6. **Implement tasks (loop until done or blocked)**
For each pending task:
- Show which task is being worked on
- Make the code changes required
- Keep changes minimal and focused
- Mark the task complete in the tasks file: `- [ ]` → `- [x]`
- Continue to next task
**Pause if:**
- Task is unclear → ask for clarification
- Implementation reveals a design issue → suggest updating artifacts
- Error or blocker encountered → report and wait for guidance
- User interrupts
7. **On completion or pause, show status**
Display:
- Tasks completed this session
- Overall progress: "N/M tasks complete"
- If all done: suggest archive
- If paused: explain why and wait for guidance
**Output During Implementation**
```
## Implementing: <change-name> (schema: <schema-name>)
Working on task 3/7: <task description>
[...implementation happening...]
✓ Task complete
Working on task 4/7: <task description>
[...implementation happening...]
✓ Task complete
```
**Output On Completion**
```
## Implementation Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 7/7 tasks complete ✓
### Completed This Session
- [x] Task 1
- [x] Task 2
...
All tasks complete! Ready to archive this change.
```
**Output On Pause (Issue Encountered)**
```
## Implementation Paused
**Change:** <change-name>
**Schema:** <schema-name>
**Progress:** 4/7 tasks complete
### Issue Encountered
<description of the issue>
**Options:**
1. <option 1>
2. <option 2>
3. Other approach
What would you like to do?
```
**Guardrails**
- Keep going through tasks until done or blocked
- Always read context files before starting (from the apply instructions output)
- If task is ambiguous, pause and ask before implementing
- If implementation reveals issues, pause and suggest artifact updates
- Keep code changes minimal and scoped to each task
- Update task checkbox immediately after completing each task
- Pause on errors, blockers, or unclear requirements - don't guess
- Use contextFiles from CLI output, don't assume specific file names
**Fluid Workflow Integration**
This skill supports the "actions on a change" model:
- **Can be invoked anytime**: Before all artifacts are done (if tasks exist), after partial implementation, interleaved with other actions
- **Allows artifact updates**: If implementation reveals design issues, suggest updating artifacts - not phase-locked, work fluidly


@ -0,0 +1,114 @@
---
name: openspec-archive-change
description: Archive a completed change in the experimental workflow. Use when the user wants to finalize and archive a change after implementation is complete.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Archive a completed change in the experimental workflow.
**Input**: Optionally specify a change name. If omitted, check whether it can be inferred from conversation context. If the request is vague or ambiguous, you MUST prompt with the available changes.
**Steps**
1. **If no change name provided, prompt for selection**
Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
Show only active changes (not already archived).
Include the schema used for each change if available.
**IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
2. **Check artifact completion status**
Run `openspec status --change "<name>" --json` to check artifact completion.
Parse the JSON to understand:
- `schemaName`: The workflow being used
- `artifacts`: List of artifacts with their status (`done` or other)
**If any artifacts are not `done`:**
- Display warning listing incomplete artifacts
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
3. **Check task completion status**
Read the tasks file (typically `tasks.md`) to check for incomplete tasks.
Count tasks marked with `- [ ]` (incomplete) vs `- [x]` (complete).
**If incomplete tasks found:**
- Display warning showing count of incomplete tasks
- Use **AskUserQuestion tool** to confirm user wants to proceed
- Proceed if user confirms
**If no tasks file exists:** Proceed without task-related warning.
4. **Assess delta spec sync state**
Check for delta specs at `openspec/changes/<name>/specs/`. If none exist, proceed without sync prompt.
**If delta specs exist:**
- Compare each delta spec with its corresponding main spec at `openspec/specs/<capability>/spec.md`
- Determine what changes would be applied (adds, modifications, removals, renames)
- Show a combined summary before prompting
**Prompt options:**
- If changes needed: "Sync now (recommended)", "Archive without syncing"
- If already synced: "Archive now", "Sync anyway", "Cancel"
If user chooses sync, use Task tool (subagent_type: "general-purpose", prompt: "Use Skill tool to invoke openspec-sync-specs for change '<name>'. Delta spec analysis: <include the analyzed delta spec summary>"). Proceed to archive regardless of choice.
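The "no delta specs" short-circuit above can be sketched with a glob test. A sketch only, using a temp directory in place of the repo root and an illustrative change name.

```shell
# Temp directory standing in for the repo root (illustrative)
root=$(mktemp -d)
name=add-auth
# The glob fails to expand when no delta specs exist, so ls exits non-zero
if ls "$root/openspec/changes/$name/specs"/*/spec.md >/dev/null 2>&1; then
  echo "delta specs found - run the sync assessment"
else
  echo "no delta specs - skip sync prompt"
fi
```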
5. **Perform the archive**
Create the archive directory if it doesn't exist:
```bash
mkdir -p openspec/changes/archive
```
Generate target name using current date: `YYYY-MM-DD-<change-name>`
**Check if target already exists:**
- If yes: Fail with error, suggest renaming existing archive or using different date
- If no: Move the change directory to archive
```bash
mv openspec/changes/<name> openspec/changes/archive/YYYY-MM-DD-<name>
```
6. **Display summary**
Show archive completion summary including:
- Change name
- Schema that was used
- Archive location
- Whether specs were synced (if applicable)
- Note about any warnings (incomplete artifacts/tasks)
**Output On Success**
```
## Archive Complete
**Change:** <change-name>
**Schema:** <schema-name>
**Archived to:** openspec/changes/archive/YYYY-MM-DD-<name>/
**Specs:** ✓ Synced to main specs (or "No delta specs" or "Sync skipped")
All artifacts complete. All tasks complete.
```
**Guardrails**
- Always prompt for change selection if not provided
- Use artifact graph (openspec status --json) for completion checking
- Don't block archive on warnings - just inform and confirm
- Preserve .openspec.yaml when moving to archive (it moves with the directory)
- Show clear summary of what happened
- If sync is requested, use openspec-sync-specs approach (agent-driven)
- If delta specs exist, always run the sync assessment and show the combined summary before prompting


@ -0,0 +1,288 @@
---
name: openspec-explore
description: Enter explore mode - a thinking partner for exploring ideas, investigating problems, and clarifying requirements. Use when the user wants to think through something before or during a change.
license: MIT
compatibility: Requires openspec CLI.
metadata:
  author: openspec
  version: "1.0"
  generatedBy: "1.2.0"
---
Enter explore mode. Think deeply. Visualize freely. Follow the conversation wherever it goes.
**IMPORTANT: Explore mode is for thinking, not implementing.** You may read files, search code, and investigate the codebase, but you must NEVER write code or implement features. If the user asks you to implement something, remind them to exit explore mode first and create a change proposal. You MAY create OpenSpec artifacts (proposals, designs, specs) if the user asks—that's capturing thinking, not implementing.
**This is a stance, not a workflow.** There are no fixed steps, no required sequence, no mandatory outputs. You're a thinking partner helping the user explore.
---
## The Stance
- **Curious, not prescriptive** - Ask questions that emerge naturally, don't follow a script
- **Open threads, not interrogations** - Surface multiple interesting directions and let the user follow what resonates. Don't funnel them through a single path of questions.
- **Visual** - Use ASCII diagrams liberally when they'd help clarify thinking
- **Adaptive** - Follow interesting threads, pivot when new information emerges
- **Patient** - Don't rush to conclusions, let the shape of the problem emerge
- **Grounded** - Explore the actual codebase when relevant, don't just theorize
---
## What You Might Do
Depending on what the user brings, you might:
**Explore the problem space**
- Ask clarifying questions that emerge from what they said
- Challenge assumptions
- Reframe the problem
- Find analogies
**Investigate the codebase**
- Map existing architecture relevant to the discussion
- Find integration points
- Identify patterns already in use
- Surface hidden complexity
**Compare options**
- Brainstorm multiple approaches
- Build comparison tables
- Sketch tradeoffs
- Recommend a path (if asked)
**Visualize**
```
┌─────────────────────────────────────────┐
│ Use ASCII diagrams liberally │
├─────────────────────────────────────────┤
│ │
│ ┌────────┐ ┌────────┐ │
│ │ State │────────▶│ State │ │
│ │ A │ │ B │ │
│ └────────┘ └────────┘ │
│ │
│ System diagrams, state machines, │
│ data flows, architecture sketches, │
│ dependency graphs, comparison tables │
│ │
└─────────────────────────────────────────┘
```
**Surface risks and unknowns**
- Identify what could go wrong
- Find gaps in understanding
- Suggest spikes or investigations
---
## OpenSpec Awareness
You have full context of the OpenSpec system. Use it naturally, don't force it.
### Check for context
At the start, quickly check what exists:
```bash
openspec list --json
```
This tells you:
- If there are active changes
- Their names, schemas, and status
- What the user might be working on
### When no change exists
Think freely. When insights crystallize, you might offer:
- "This feels solid enough to start a change. Want me to create a proposal?"
- Or keep exploring - no pressure to formalize
### When a change exists
If the user mentions a change or you detect one is relevant:
1. **Read existing artifacts for context**
- `openspec/changes/<name>/proposal.md`
- `openspec/changes/<name>/design.md`
- `openspec/changes/<name>/tasks.md`
- etc.
2. **Reference them naturally in conversation**
- "Your design mentions using Redis, but we just realized SQLite fits better..."
- "The proposal scopes this to premium users, but we're now thinking everyone..."
3. **Offer to capture when decisions are made**
| Insight Type | Where to Capture |
|--------------|------------------|
| New requirement discovered | `specs/<capability>/spec.md` |
| Requirement changed | `specs/<capability>/spec.md` |
| Design decision made | `design.md` |
| Scope changed | `proposal.md` |
| New work identified | `tasks.md` |
| Assumption invalidated | Relevant artifact |
Example offers:
- "That's a design decision. Capture it in design.md?"
- "This is a new requirement. Add it to specs?"
- "This changes scope. Update the proposal?"
4. **The user decides** - Offer and move on. Don't pressure. Don't auto-capture.
---
## What You Don't Have To Do
- Follow a script
- Ask the same questions every time
- Produce a specific artifact
- Reach a conclusion
- Stay on topic if a tangent is valuable
- Be brief (this is thinking time)
---
## Handling Different Entry Points
**User brings a vague idea:**
```
User: I'm thinking about adding real-time collaboration
You: Real-time collab is a big space. Let me think about this...
COLLABORATION SPECTRUM
════════════════════════════════════════════
Awareness Coordination Sync
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Presence│ │Cursors │ │ CRDT │
│ "3 │ │ Multi │ │Conflict│
│online" │ │ select │ │ free │
└────────┘ └────────┘ └────────┘
│ │ │
trivial moderate complex
Where's your head at?
```
**User brings a specific problem:**
```
User: The auth system is a mess
You: [reads codebase]
┌─────────────────────────────────────────────┐
│ CURRENT AUTH FLOW │
└─────────────────────────────────────────────┘
┌─────────────┼─────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Google │ │ GitHub │ │ Email │
│ OAuth │ │ OAuth │ │ Magic │
└────┬────┘ └────┬────┘ └────┬────┘
│ │ │
└─────────────┼─────────────┘
┌───────────┐
│ Session │
└─────┬─────┘
┌───────────┐
│ Perms │
└───────────┘
I see three tangles. Which one's burning?
```
**User is stuck mid-implementation:**
```
User: /opsx:explore add-auth-system
The OAuth integration is more complex than expected
You: [reads change artifacts]
You're on task 4: "Implement OAuth flow"
Let me trace what's involved...
[draws diagram, explores options, suggests paths]
Want to update the design to reflect this?
Or add a spike task to investigate?
```
**User wants to compare options:**
```
User: Should we use Postgres or SQLite?
You: Generic answer is boring. What's the context?
User: A CLI tool that tracks local dev environments
You: That changes everything.
┌─────────────────────────────────────────────────┐
│ CLI TOOL DATA STORAGE │
└─────────────────────────────────────────────────┘
Key constraints:
• No daemon running
• Must work offline
• Single user
SQLite Postgres
Deployment embedded ✓ needs server ✗
Offline yes ✓ no ✗
Single file yes ✓ no ✗
SQLite. Not even close.
Unless... is there a sync component?
```
---
## Ending Discovery
There's no required ending. Discovery might:
- **Flow into a proposal**: "Ready to start? I can create a change proposal."
- **Result in artifact updates**: "Updated design.md with these decisions"
- **Just provide clarity**: User has what they need, moves on
- **Continue later**: "We can pick this up anytime"
When it feels like things are crystallizing, you might summarize:
```
## What We Figured Out
**The problem**: [crystallized understanding]
**The approach**: [if one emerged]
**Open questions**: [if any remain]
**Next steps** (if ready):
- Create a change proposal
- Keep exploring: just keep talking
```
But this summary is optional. Sometimes the thinking IS the value.
---
## Guardrails
- **Don't implement** - Never write code or implement features. Creating OpenSpec artifacts is fine, writing application code is not.
- **Don't fake understanding** - If something is unclear, dig deeper
- **Don't rush** - Discovery is thinking time, not task time
- **Don't force structure** - Let patterns emerge naturally
- **Don't auto-capture** - Offer to save insights, don't just do it
- **Do visualize** - A good diagram is worth many paragraphs
- **Do explore the codebase** - Ground discussions in reality
- **Do question assumptions** - Including the user's and your own

---
name: openspec-propose
description: Propose a new change with all artifacts generated in one step. Use when the user wants to quickly describe what they want to build and get a complete proposal with design, specs, and tasks ready for implementation.
license: MIT
compatibility: Requires openspec CLI.
metadata:
author: openspec
version: "1.0"
generatedBy: "1.2.0"
---
Propose a new change - create the change and generate all artifacts in one step.
I'll create a change with artifacts:
- proposal.md (what & why)
- design.md (how)
- tasks.md (implementation steps)
When ready to implement, run `/opsx:apply`
---
**Input**: The user's request should include a change name (kebab-case) OR a description of what they want to build.
**Steps**
1. **If no clear input provided, ask what they want to build**
Use the **AskUserQuestion tool** (open-ended, no preset options) to ask:
> "What change do you want to work on? Describe what you want to build or fix."
From their description, derive a kebab-case name (e.g., "add user authentication" → `add-user-auth`).
**IMPORTANT**: Do NOT proceed without understanding what the user wants to build.
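The name derivation can be sketched mechanically. This is an illustrative helper, not part of the openspec CLI, and shortening words (such as "authentication" to "auth" in the example above) remains a judgment call the sketch does not make:

```python
import re

def kebab_name(description: str) -> str:
    """Lowercase the description and join its word runs with dashes."""
    return "-".join(re.findall(r"[a-z0-9]+", description.lower()))
```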
2. **Create the change directory**
```bash
openspec new change "<name>"
```
This creates a scaffolded change at `openspec/changes/<name>/` with `.openspec.yaml`.
3. **Get the artifact build order**
```bash
openspec status --change "<name>" --json
```
Parse the JSON to get:
- `applyRequires`: array of artifact IDs needed before implementation (e.g., `["tasks"]`)
- `artifacts`: list of all artifacts with their status and dependencies
4. **Create artifacts in sequence until apply-ready**
Use the **TodoWrite tool** to track progress through the artifacts.
Loop through artifacts in dependency order (artifacts with no pending dependencies first):
a. **For each artifact that is `ready` (dependencies satisfied)**:
- Get instructions:
```bash
openspec instructions <artifact-id> --change "<name>" --json
```
- The instructions JSON includes:
- `context`: Project background (constraints for you - do NOT include in output)
- `rules`: Artifact-specific rules (constraints for you - do NOT include in output)
- `template`: The structure to use for your output file
- `instruction`: Schema-specific guidance for this artifact type
- `outputPath`: Where to write the artifact
- `dependencies`: Completed artifacts to read for context
- Read any completed dependency files for context
- Create the artifact file using `template` as the structure
- Apply `context` and `rules` as constraints - but do NOT copy them into the file
- Show brief progress: "Created <artifact-id>"
b. **Continue until all `applyRequires` artifacts are complete**
- After creating each artifact, re-run `openspec status --change "<name>" --json`
- Check if every artifact ID in `applyRequires` has `status: "done"` in the artifacts array
- Stop when all `applyRequires` artifacts are done
c. **If an artifact requires user input** (unclear context):
- Use **AskUserQuestion tool** to clarify
- Then continue with creation
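The readiness loop in step 4 reduces to simple set logic. The sketch below assumes each artifact object carries `id`, `status`, and `dependsOn` fields; confirm the real field names against the `--json` output before relying on them.

```python
def next_ready(artifacts):
    """Return IDs of pending artifacts whose dependencies are all done.

    `artifacts`: list of dicts with "id", "status" ("done"/"pending"),
    and "dependsOn" (list of IDs). Field names are assumptions."""
    done = {a["id"] for a in artifacts if a["status"] == "done"}
    return [
        a["id"]
        for a in artifacts
        if a["status"] != "done" and set(a.get("dependsOn", [])) <= done
    ]

def apply_ready(artifacts, apply_requires):
    """True once every artifact listed in applyRequires is done."""
    done = {a["id"] for a in artifacts if a["status"] == "done"}
    return set(apply_requires) <= done
```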
5. **Show final status**
```bash
openspec status --change "<name>"
```
**Output**
After completing all artifacts, summarize:
- Change name and location
- List of artifacts created with brief descriptions
- What's ready: "All artifacts created! Ready for implementation."
- Prompt: "Run `/opsx:apply`, or ask me to start implementing the tasks."
**Artifact Creation Guidelines**
- Follow the `instruction` field from `openspec instructions` for each artifact type
- The schema defines what each artifact should contain - follow it
- Read dependency artifacts for context before creating new ones
- Use `template` as the structure for your output file - fill in its sections
- **IMPORTANT**: `context` and `rules` are constraints for YOU, not content for the file
- Do NOT copy `<context>`, `<rules>`, `<project_context>` blocks into the artifact
- These guide what you write, but should never appear in the output
**Guardrails**
- Create ALL artifacts needed for implementation (as defined by schema's `apply.requires`)
- Always read dependency artifacts before creating a new one
- If context is critically unclear, ask the user - but prefer making reasonable decisions to keep momentum
- If a change with that name already exists, ask if user wants to continue it or create a new one
- Verify each artifact file exists after writing before proceeding to next

`.vsconfig` (new file):

```json
{
  "version": "1.0",
  "components": [
    "Microsoft.VisualStudio.Workload.ManagedGame"
  ]
}
```

# Repository Guidelines

## Project Structure & Module Organization
`Assets/GameMain/Scripts` holds gameplay code, organized by domain such as `Components`, `Entity`, `Procedure`, `Simulation`, `UI`, and `Editor`. Scenes live in `Assets/GameMain/Scenes`, and `Assets/Launcher.unity` is the normal local entry point. Shared framework code is under `Assets/GameFramework`; third-party code and packages live in `Assets/Plugins` and `Packages/`. Automated tests are in `Assets/Tests/Simulation/{EditMode,PlayMode}`. Design notes and change proposals are tracked in `docs/` and `openspec/`. GameFramework build and resource configs live in `Assets/GameMain/Configs`.

## Build, Test, and Development Commands
- `Unity.exe -projectPath .` opens the project in Unity `2022.3.62f3c1`.
- Open `VampireLike.sln` in Rider or Visual Studio for C# editing.
- Open `Assets/Launcher.unity` and press Play to run the main loop locally.
- `Unity.exe -batchmode -nographics -projectPath . -runTests -testPlatform EditMode -testResults Logs/editmode-test-results.xml -logFile Logs/editmode-tests.log -quit` runs EditMode tests.
- `Unity.exe -batchmode -nographics -projectPath . -runTests -testPlatform PlayMode -testResults Logs/playmode-test-results.xml -logFile Logs/playmode-tests.log -quit` runs PlayMode tests.

## Coding Style & Naming Conventions
Use C# with 4-space indentation and K&R braces. Keep one main type per file, and match file names to type names. Use `PascalCase` for public types and members, `camelCase` for locals and parameters, and `[SerializeField] private` for Inspector-facing fields. Match nearby namespaces and folder boundaries (`Simulation`, `Entity`, `UI`, etc.). Read and edit text files using `UTF-8`, and use `LF` line endings. No repo-local linter or `.editorconfig` is committed, so use Rider/VS formatting and follow surrounding code. When adding or moving Unity assets, always commit the paired `.meta` files.

## Testing Guidelines
Tests use Unity Test Framework with NUnit. Name files `*Tests.cs`. Put pure logic coverage in EditMode tests and runtime or scene behavior in PlayMode tests. There is no fixed coverage percentage, but changes to simulation, combat, procedures, or UI flow should ship with new or updated tests in `Assets/Tests`.

## Commit & Pull Request Guidelines
Recent history favors short, imperative commit subjects such as `fix test sample`, `Update NearestTargetSelector.cs`, and `Cleanup 1`. Keep commits focused and add module context when helpful, for example `Simulation: tighten collision query pruning`. PRs should include a behavior summary, linked task or issue, tests run, and screenshots or video for UI or scene changes.

## Repository Hygiene
Do not commit generated Unity output such as `Library/`, `Temp/`, `Logs/`, `obj/`, generated `.csproj`, or `.sln` files. Review changes to `Assets/GameMain/Configs/*.xml` and `Assets/StreamingAssets` carefully because they affect packaging and runtime resources.

# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
**VampireLike** is a Unity 2022.3 LTS action survival game project using a `GameMain + GameFramework` layered architecture. The project uses C# with 4-space indentation, K&R braces, and UTF-8 with BOM encoding.
## Build & Development Commands
- Open in Unity Editor: `Unity -projectPath .`
- Open solution in IDE: `VampireLike.sln`
- Run from Launcher scene: Open `Assets/Launcher.unity` in Editor and press Play
- Run tests via Unity Editor: `Window > General > Test Runner`
- Run EditMode tests via CLI: `Unity -batchmode -projectPath . -runTests -testPlatform editmode -quit -logFile Logs/editmode-tests.log`
## Architecture
### Code Organization
- `Assets/GameMain/Scripts/` - Game business code (UI, Entity, Procedure, Events, DataTable)
- `Assets/GameFramework/` - Shared framework/runtime and editor extensions
- `Assets/Plugins/` - Third-party plugins (e.g., DOTween)
- `Assets/Launcher.unity` - Entry point scene
### GameFramework (Base Framework)
The framework provides modular components accessible via `GameEntry.<ComponentName>`:
- `GameEntry.Event` - Event system (subscribe/unsubscribe pattern)
- `GameEntry.Entity` - Entity management (show/hide/instantiate pooled objects)
- `GameEntry.UI` - UI form management
- `GameEntry.DataTable` - Data table loading
- `GameEntry.Procedure` - Finite state machine procedures
- `GameEntry.Resource` - Resource loading
- Other components: Config, Scene, Sound, Setting, ObjectPool, Fsm, etc.
GameEntry is partially generated. Custom components are initialized in `GameEntry.Custom.cs`.
### UI Architecture (Controller/UseCase/View Pattern)
UI forms follow a layered pattern in `Assets/GameMain/Scripts/UI/`:
- **Controller** - Inherits `UIFormControllerCommonBase<Context, Form>`. Handles input events, subscribes to game events, manages UI lifecycle
- **UseCase** - Business logic layer (e.g., `ShopFormUseCase`). Pure logic without Unity dependencies
- **View** - MonoBehaviour UGuiForm. Handles presentation only
Context objects (e.g., `ShopFormContext`) carry display data to the View layer.
### Entity System
Entities are game objects managed by `GameEntry.Entity`:
- `Assets/GameMain/Scripts/Entity/EntityLogic/` - Entity classes (Player, Enemy, Weapon, Coin, Exp, Effect)
- `Assets/GameMain/Scripts/Entity/EntityData/` - Entity data objects (spawn configuration)
- Weapon attack effects implement `IWeaponAttackEffect` interface
- Target selectors implement `ITargetSelector` (NearestTargetSelector, LowestHealthTargetSelector, HighestHealthTargetSelector)
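The selector interface is a plain strategy pattern. Sketched in Python for brevity; the real `ITargetSelector` is C#, so treat the shapes below as placeholders and read the actual signature from the source.

```python
# Each strategy maps (candidates, context) to a chosen target.
def nearest(targets, origin):
    return min(targets, key=lambda t: abs(t["x"] - origin))

def lowest_health(targets, _origin=None):
    return min(targets, key=lambda t: t["hp"])

def highest_health(targets, _origin=None):
    return max(targets, key=lambda t: t["hp"])
```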
### Simulation System (ECS-like)
`Assets/GameMain/Scripts/Simulation/SimulationWorld.cs` provides a data-oriented simulation layer:
- Separates simulation data (`*SimData` structs) from presentation (MonoBehaviour entities)
- Uses `SimulationWorld.Tick()` to advance simulation state
- Data flows through: JobDataChannel -> Jobs (enemy/projectile/collision logic) -> OutputCommit -> Presentation sync
- Enemy separation uses spatial indexing and temporal avoidance
- Tests in `Assets/Tests/Simulation/` validate simulation behavior
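The data-oriented split can be illustrated language-agnostically (Python here; the project uses C# `*SimData` structs and jobs, so all names below are placeholders):

```python
from dataclasses import dataclass

@dataclass
class EnemySimData:
    x: float   # pure simulation state: no MonoBehaviour, no engine types
    vx: float

def tick(world, dt):
    """Advance sim data only; presentation sync reads the result afterwards."""
    for e in world:
        e.x += e.vx * dt    # "job" logic mutates sim data
    return world            # analogous to the OutputCommit step
```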
### Component Pattern
Gameplay components in `Assets/GameMain/Scripts/Components/`:
- `MovementComponent` - Handles movement with direction/speed
- `AttackComponent` - Attack logic
- `HealthComponent` - HP management
- `StatComponent` - Stats with modifiers (additive/multiplicative)
- `AbsorbComponent` - Pickup absorption
- `InputComponent` - Player input
- `BackpackComponent` - Inventory
Components use `OnInit()` initialization and `OnUpdate()` per-frame logic.
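An additive/multiplicative modifier split usually follows the standard formula sketched below; whether `StatComponent` uses exactly this ordering is an assumption to verify against the C# source.

```python
def final_stat(base, additive=(), multiplicative=()):
    """(base + sum of additive modifiers) * product of multiplicative ones.

    The ordering is the common convention, assumed rather than confirmed."""
    value = base + sum(additive)
    for factor in multiplicative:
        value *= factor
    return value
```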
### Procedure System
Procedures in `Assets/GameMain/Scripts/Procedure/` manage game flow state machine:
- `ProcedureStartMenu` - Menu scene
- `ProcedureStressTest` - Stress test mode
- Procedures transition via `GameEntry.Procedure`
### Data Tables
Data tables in `Assets/GameMain/Scripts/DataTable/` (DR prefix = Data Row):
- DRBullet, DREnemy, DREntity, DRGoods, DRLevel, DRLevelRarity, DRLevelUpReward, DRMusic, DRProp, DRRole, DRScene, DRSound, DRUIForm, DRUISound, DRWeapon
- Loaded via `GameEntry.DataTable`
### Events
Custom events in `Assets/GameMain/Scripts/CustomEvent/` follow `*EventArgs` naming. Subscribe via `GameEntry.Event.Subscribe(EventArgs.EventId, handler)`.
### Enum Definitions
Enums in `Assets/GameMain/Scripts/Definition/Enum/`: BulletType, CampType, EnemyType, GoodsType, ItemRarity, RelationType, StatType, UIFormType, WeaponType
## Coding Conventions
- C# with 4-space indentation, K&R braces
- One main type per file, file name matches type name
- `PascalCase` for public types/members, `camelCase` for locals/parameters
- `[SerializeField] private` for Inspector exposure
- Always preserve `.meta` files for Unity asset GUID references
## Testing
Tests use `com.unity.test-framework` (NUnit). Place tests in:
- `Assets/Tests/` - General tests
- `Assets/Tests/Simulation/EditMode/` - EditMode simulation tests
- `Assets/Tests/Simulation/PlayMode/` - PlayMode simulation tests
- Name test files `*Tests.cs`
Run via Unity Test Runner (`Window > General > Test Runner`) or CLI.
## Ignore (Do Not Commit)
`Library/`, `Temp/`, `Logs/`, `obj/`, `UserSettings/`

`README.md`:
# VampireLike
## This Is a Learning Project
VampireLike is an action survival project built on **Unity 2022.3 LTS**, intended as practice for the following techniques:
- **Component-based design in GameFramework**: how `GameEntry.<Component>` decouples modules (Event, Entity, UI, Procedure, Resource, etc.)
- **Layered UI architecture**: the Controller / UseCase / View separation, with a "context" object acting as the data bridge
- **Data-driven design**: separating configuration from code via DataTables (DRBullet, DREnemy, DRWeapon, etc.)
- **Data-oriented simulation**: `SimulationWorld` shows how ECS-style thinking applies in Unity (SimData structs → parallel job processing → output commit → presentation sync)
- **Entity-Component pattern**: entities (Player/Enemy/Weapon) are separated from their data (EntityData) and pooled for efficient spawn/despawn
- **Skill system design**: composing StatModifiers (additive/multiplicative) and the AttackComponent casting flow
- **Target selector pattern**: weapons support multiple targeting strategies (nearest / lowest health / highest health) through the `ITargetSelector` interface
## Project Structure
```
Assets/
├── GameFramework/            # Shared framework (from GameFramework.cn)
│   └── Scripts/Runtime/      # Event/Entity/UI/Procedure/Resource components
├── GameMain/                 # Game business code
│   ├── Scripts/
│   │   ├── Base/             # GameEntry entry point
│   │   ├── Components/       # Gameplay components (Movement/Attack/Health/Stat...)
│   │   ├── CustomEvent/      # Custom events
│   │   ├── DataTable/        # DR* data-row definitions
│   │   ├── Definition/       # Enums, structs, constants
│   │   ├── Entity/           # Entity logic and data
│   │   ├── Procedure/        # Procedure state machine
│   │   ├── Simulation/       # Data-oriented simulation layer
│   │   ├── UI/               # UI (Controller/UseCase/View separation)
│   │   └── CustomComponent/  # Custom feature components
│   ├── Scenes/               # Menu/Main/Game scenes
│   └── DataTables/           # Data table assets
├── Launcher.unity            # Launcher scene
├── Plugins/                  # Third-party plugins (DOTween)
└── Tests/                    # Unit tests
```
## Tech Stack
- Unity 2022.3.62f3c1 LTS
- Input System (`com.unity.inputsystem`)
- Universal Render Pipeline (`com.unity.render-pipelines.universal`)
- TextMeshPro (`com.unity.textmeshpro`)
- DOTween (tweening)
- Unity Test Framework (NUnit)
- Newtonsoft.Json (config serialization)
## How to Run
1. Open the project root with Unity Hub.
2. Confirm the Unity version is `2022.3.62f3c1`.
3. Open `Assets/Launcher.unity` and press Play.
Via CLI:
```powershell
Unity -projectPath .
```
## Testing
```powershell
# EditMode tests
Unity -batchmode -projectPath . -runTests -testPlatform editmode -quit -logFile Logs/editmode-tests.log
```
Test directories: `Assets/Tests/`, `Assets/Tests/Simulation/EditMode/`, `Assets/Tests/Simulation/PlayMode/`
## Collaboration
- Use short, imperative commit subjects, ideally with module context (e.g., `UI: Fix shop item refresh logic`)
- PRs should include a change summary, testing notes, and a linked task/issue
- Attach screenshots or recordings for UI/visual changes