Table of Contents
◾️ Steps to Create Original Challenges (Coding Style)
Install Marine.
Use Marine to implement your coding Question.
Submit the generated project to the Track operations team.
◾️ What Is Marine?
Marine is a tool that lets you create Track content and preview its behavior in real time.
For details, see the overview below.
◾️ Overview
Marine is a real-time preview tool for Track content.
While authoring, you can view and run the content exactly as it will appear on Track—tests, blanks, and all interactive elements behave identically.
◾️ Basic Usage
1. At the top of the screen, open the first select box. All directories under `contents/` that contain either `track.yml` or `book.yml` are listed.
2. Choose a directory. The lower workspace refreshes and displays that content.
3. Edit any file in the directory, then click `Reload` to apply the changes to the workspace.
Workspace behavior matches Track, so you can run tests, fill blanks, and verify interactions.
◾️ Header Layout
| UI Element | Purpose |
|---|---|
| | Lists directories that include `track.yml` or `book.yml` |
| | Choose the programming language for CLI / Function challenges |
| | Select which sample solution to display |
| | Check whether the content can be registered on Track |
◾️ Menu Options
| Option | Description |
|---|---|
| | Rescan |
| | Create a ZIP of the entire content (including …) |
| | Toggle developer-mode view; see each content's developer guide for details |
| | Switch the execution environment; useful when testing new language versions |
| | Switch between English / Japanese in README files and test cases |
◾️ Writing Content
Marine supports authoring these content types:
Challenge
Quiz
Book
◾️ Overview
Track Books are designed so that almost everything can be authored in Markdown.
This document describes every file and option used to build a Book.
◾️ A Minimal Book
The smallest possible Book contains just two files:
```
contents/
└─ my-book/
   ├─ book.yml
   └─ chapter1.md
```

book.yml

```yaml
title: Introduction to CODEPREP
detail: A sample Book for CODEPREP.
chapters:
  - chapter1.md
```

chapter1.md

```markdown
# First Chapter
Sample chapter definition.

## First Section
Sample section definition.
Fill the blank so the HTML prints "Hello World".

### main(index.html)

<p>Hello ${World}</p>
```
When viewed in Marine you will see `<p>Hello [ ]</p>`.
◾️ book.yml

`book.yml` defines the whole Book.

Required keys
- `title` – Book title
- `detail` – Short description (use `|` for multi-line)
- `chapters` – List of chapter definition files (order = chapter order)

Optional keys
`cover`, `files`, `images`, `download`, `playground`, `webStorage`, `keepSession`, `timeInMinutes`, `env`, `shared`, `carryOver`, `initialize`, `sandbox`, `answerSection`

Title & Detail

```yaml
detail: |
  Multi-line description written with the pipe syntax.
```
Chapters
List the Markdown files that define each chapter.
File names are arbitrary; only the path matters.
Common Optional Keys (excerpt)
- `files` – Files that appear as tabs in every section
- `images` – Images referenced only in explanations
- `download` / `playground` – `true` (default) or `false`
- `webStorage` – Enable `sessionStorage` / `localStorage`
- `keepSession` – Keep a remote shell alive between commands
- `env` – Docker image and caches; see `env.md`
- `answerSection` – `none`, `each`, or `last` (default `each`)
- `carryOver` – Files whose content is carried to later sections
- `initialize` – Files & commands executed once when a remote session starts
- `sandbox` – Global sandbox files and options

◾️ Chapter Definition File <a name="chapter"></a>
Headings
- `#` = chapter title (must be unique, first line)
- `##` = section title (multiple allowed)
- `###` = sub-section: `main`, `hint`, `preview`, etc.

Comments
- Lines starting with `//` are ignored during compilation (still visible on GitHub).
- HTML comments (`<!-- -->`) are not ignored.

Section Structure & Types
A section may contain one of these main blocks:
- `main` – Code-fill blanks
- `exercise` – Client / Remote / CLI tasks
- `playground` – Free-edit area
- `sandbox` – Persistent free-edit area
- `mcq` – Multiple-choice question
- `fib` – Fill-in-blank inside Markdown
- `preview` – Read-only explanation

`main`, `exercise`, `playground`, `fib`, `mcq`, and `sandbox` are mutually exclusive.

Main Block Example
```markdown
### main(index.html)

<${p}>Hello World<${\/p}>
```

`${answer}` rules
- Plain string → exact match
- Starts with `/` → RegExp
- Starts with `{` → JSON rule
- Starts with `\\` → escape the first character

Use `- prefix:` under the `### main` line to replace `$` with another symbol.

Hint / Tips
```markdown
### hint
…hidden hint…

### tips
…alternative hint…
```
Answer
Use `### answer` for explanations; they can be aggregated later.

Files

```markdown
### files
- main.css
- [main.js](files/app.js)
```
MCQ
Markdown list with `- [ ]` items; correct answers use `- [x]`.

FIB

Markdown containing `${blank}`. Supports dropdowns:

```markdown
### dropdown
- byte
- 1 byte
- 1 kilobyte
```
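For orientation, an `mcq` and a `fib` section in a chapter file could look roughly like this; the question text, choices, and blank are made up, and the exact heading arguments may differ:

```markdown
## Quiz Section

### mcq
Which tag creates a hyperlink?
- [ ] <p>
- [x] <a>
- [ ] <div>

### fib
One byte is ${8} bits.
```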
Preview & Remote Execution
If `remote` is defined, a console appears.
If not, Track auto-chooses, in this order:
1. A defined `preview`
2. `index.html` in `main`
3. `index.html` in `files`
4. Otherwise, `main` is shown as plain text.

`remote` supports `command`, `build`, `prepare`, `cwd`, `after`, `previewFile`, `mode: html | console`, and the special token `@main`.

◾️ Advanced Functions <a name="advanced"></a>
Reusing Previous Sections
Insert the completed code from a prior section:

```markdown
${func:section(prev)}
```

`prev`, a numeric index, or an `alias` are supported.

Split by blank lines or markers:

```markdown
${func:section-before-blank(prev)}
…new code…
${func:section-after-blank(prev)}
```

func:mark

Embed a marker to target insertion points:

```markdown
// ${func:mark("next")}
```

Then reference:

```markdown
${func:section-after(prev,"next")}
```

External File Helpers

`func:file(...)`, `file-before-blank`, etc. mirror the `section` helpers.
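As a rough sketch, a later section could pull in the previous section's completed code and add to it like this (the section title and the appended line are hypothetical):

```markdown
## Second Section
Extend the page from the previous section.

### main(index.html)

${func:section(prev)}
<${footer}>Bye<${\/footer}>
```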
◾️ Exercise / Playground / Sandbox <a name="exercise"></a>

Client Exercise
```markdown
### exercise
- mode: client
- file: [main.js](assets/1-2.main.js)
- test: assets/1-2.test.js
```
`test` must output TAP (e.g., Mocha + `-R tap`).

Remote Exercise
```markdown
### exercise
- mode: remote
- build: javac Main.java
- file: [Main.java](assets/Main.java)
- hidden: [Tests.java](assets/Tests.java)
- command: java Tests
```
CLI Exercise
Similar to a coding challenge; use `mode: cli`.

Importing a Challenge

book.yml

```yaml
shared:
  - exercise1: ../../challenges/sample
```

chapter.md

```markdown
- mode: coding
- reference: exercise1
```
Partial Edit Regions
Wrap editable code with:
```
// EXERCISE_BEGIN_EDIT
…user edits here…
// EXERCISE_END_EDIT
```
Playground
```markdown
### playground
- mode: client
- file: index.html
- file: main.js
```
Sandbox
Global files in `book.yml`; per-section display with:

```markdown
### sandbox
- show: index.html
```
◾️ Debug / Env / DevMode <a name="debug"></a>
Debug
Define like challenges:

```markdown
### debug
- command: cat $f | $c
- raw:[Case 1]a b c
- file:[Case 2]test/in/sample.txt
```
Per-Section env

```markdown
### env
- imageName: java
- cacheDirs: /root/.m2
```
DevMode (Marine)
- Saves user edits between reloads
- Enforces solving order (with skip)
- Enables carry-over testing
Use “Clean up user code” to reset.
AI
◾️ Overview
An AI Challenge covers programming problems that are not suitable for Track's online editor because they
- handle large-scale datasets, or
- require extensive computing resources.
Typical use cases involve machine-learning tasks.
◾️ What Candidates Can Do in an AI Challenge
1. Read the problem statement (`README.md`) in the browser.
2. Download the data archive required to solve the problem. File sizes can range from several hundred MB to multiple GB.
3. Upload an answer file in the format specified by the author (e.g., JSON or CSV). Each upload triggers automatic grading.
By default, Track shows the raw score after grading.
If `hideRawScore: true` is set in `track.yml`, Track instead shows a rank (S / A / B / C / D / E).

◾️ What Authors Must Prepare
1. Write the problem statement in `README.md`.
2. Package all dataset files into a `.tar.gz` or `.zip`, then host the archive at any publicly downloadable URL.
3. Build a judge application that reads the candidate's upload, evaluates it, and prints a numeric score to `STDOUT`. If the judge itself is large (e.g., needs extra data), you may also host it as a `.tar.gz` at a public URL.
Unlike coding challenges—whose entire content lives in Git—AI Challenges keep large files in external storage.
During import, Track copies these files into private storage, so candidates never access the public URLs directly.
◾️ Judge Application Specification
- Runs inside Track's Docker execution environment.
- May be written in any language supported by Track.
- Obtain the uploaded filename via the `UPLOAD_FILENAME` environment variable.
- Output only one numeric score to `STDOUT`; that value becomes the candidate's score.
- On invalid input, write a clear error message to `STDERR`; the message is shown to the candidate as-is.
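For orientation, here is a minimal sketch of such a judge in Node.js. It assumes the answer key is stored alongside the judge as `expected.json` and that the upload is a JSON array of labels; those file names and formats are hypothetical, while `UPLOAD_FILENAME`, the numeric score on STDOUT, and error messages on STDERR follow the specification above.

```javascript
// judge.js - hypothetical sketch of an AI Challenge judge
const fs = require("fs");

try {
  // Track passes the path of the candidate's uploaded file in UPLOAD_FILENAME.
  const uploadPath = process.env.UPLOAD_FILENAME;
  const submitted = JSON.parse(fs.readFileSync(uploadPath, "utf8"));
  const expected = JSON.parse(fs.readFileSync("expected.json", "utf8"));

  if (!Array.isArray(submitted) || submitted.length !== expected.length) {
    // Anything written to STDERR is shown to the candidate as-is.
    console.error(`Expected ${expected.length} predictions, got ${submitted.length}`);
    process.exit(1);
  }

  // Simple accuracy metric: fraction of entries matching the answer key.
  const correct = submitted.filter((value, i) => value === expected[i]).length;
  const score = correct / expected.length;

  // The single numeric value on STDOUT becomes the candidate's score.
  console.log(score);
} catch (err) {
  console.error(`Failed to read or parse the uploaded file: ${err.message}`);
  process.exit(1);
}
```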
◾️ scoreType

Different tasks use different metrics: accuracy, recall, RMSE, etc.
Define how Track should rank scores with `scoreType` in `track.yml`.
| Key | Required | Description |
|---|---|---|
| `rankBy` | Yes | Ranking direction (`high` means a higher score is better) |
| `max` | No | Upper bound of possible scores |
| `min` | No | Lower bound of possible scores |
```yaml
scoreType:
  rankBy: high # higher score = better
  max: 1
  min: 0
```
◾️ rankedRange
Map raw scores to ranks S-E.
```yaml
rankedRange:
  S: 0.80..
  A: 0.75..0.80
  B: 0.65..0.75
  C: 0.50..0.65
  D: 0.40..0.50
  E: ..0.40
```
The first matching range takes precedence if ranges overlap.
◾️ track.yml
| Key | Type | Required | Purpose |
|---|---|---|---|
| `type` | String | Yes | Must be `AI` |
| `readonly` | Array\<String\> | — | Public URL(s) of dataset archive(s) (`.tar.gz` or `.zip`) |
| `hide` | Array\<String\> | — | Files to bundle but hide from candidates (e.g., judge source). Accepts file paths or public URLs |
| `images` | Array\<String\> | — | Image files referenced from `README.md` |
| `test` | String | Yes | Command to run the judge application |
| `build` | Array\<String\> | — | Build commands if the judge needs compilation |
| `envConf` | Hash | Yes | Docker image and settings (see `env.md`) |
| `scoreType` | Hash | — | Scoring rules (see above) |
| `hideRawScore` | Boolean | — | If `true`, show a rank (S–E) instead of the raw score |
Unknown keys are ignored.
Example:
```yaml
type: AI
test: node test.js
images:
  - images/nue.jpg
readonly:
  - https://example.com/ai-challenge/data.tar.gz
hide:
  - https://example.com/ai-challenge/judge.tar.gz
scoreType:
  rankBy: high
  min: 0
  max: 1
envConf:
  imageName: givery/track-base2
```
◾️ README.md
Write the full problem statement in Markdown.
Official challenges follow a template; custom challenges may use any structure.
Markdown extensions are available—see the extension guide.
README.md itself is not listed in track.yml.
◾️ Internationalization
During compilation, Track selects the README file based on the language option (ja / en):
| Language | Preferred File | Fallback |
|---|---|---|
| ja | `README_ja.md` | `README.md` |
| en | `README_en.md` | `README.md` |
Provide both files for bilingual challenges.
Inside the judge application, read the CHALLENGE_LANGUAGE environment variable to localize error messages.
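For example, a judge could branch on that variable when reporting errors (a small sketch; the message text is arbitrary):

```javascript
// CHALLENGE_LANGUAGE is "ja" or "en" at runtime.
const lang = process.env.CHALLENGE_LANGUAGE || "en";
const messages = {
  en: "The uploaded file must be a JSON array.",
  ja: "アップロードされたファイルはJSON配列である必要があります。",
};
console.error(messages[lang]);
```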
App
◾️ Overview
An Application Challenge lets candidates build and submit a full Web application and have it executed and evaluated on Track.
Typical frameworks include but are not limited to:
Ruby on Rails (Ruby)
Express (Node.js)
Spring (Java)
◾️ Candidate Workflow
The problem statement appears on the left-hand side.
The candidate downloads a ZIP starter kit, unpacks it locally, and builds the required Web application. • Their local environment may need tools such as Java, Node.js, or Docker.
When finished, they compress the entire project into a ZIP file and upload it.
Track starts the Web application inside its environment and runs the author-supplied tests, allowing the candidate to verify: • whether the app runs on Track, and • whether it passes all tests.
◾️ Screen Layout
The interface has four areas:
Left Pane – Problem statement and a toggle to show the file tree of the uploaded ZIP.
Right-Top Pane – Displays the file selected in the tree, with diffs against the starter kit or last upload.
Right-Bottom Pane – Three tabs:
- `Status` – Current runtime state of the app.
- `Test Cases` – Results of the latest test run (may take several minutes).
- `Log` – Logs from the last app startup.

Footer Buttons
- `Start / Stop` – Launch or halt the app using the latest ZIP.
- `Upload` – Upload a ZIP file.
- `Download` – Download the starter kit.
◾️ Using Marine to Create an Application Challenge
Certain authoring features call AWS APIs and therefore require AWS credentials.
Add the following environment variables to your `docker-compose.yml` when running Marine:

```yaml
environment:
  AWS_ACCESS_KEY_ID: xxxx
  AWS_SECRET_ACCESS_KEY: xxxx
```
Without these keys you can still:
- validate `application.yml`,
- preview `README.md`, and
- view the starter kit / solution in Marine.
You cannot:
- register the challenge on Track,
- run tests,
- upload ZIPs, or
- start / stop the app in the Track environment.
◾️ Application Challenge Creation Workflow
1. Create `application.yml`.
2. Write `README.md`.
3. Prepare the starter kit inside the `app` directory.
4. Create a sample solution in the `solution` directory.
5. Write startup and test scripts in the `hidden` directory.
6. If test output is not already TAP, add a converter in the `conversion` directory (a directory layout sketch follows this list).
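Under these conventions, a project might be laid out roughly as below; only the directory names come from the steps above, the annotations are illustrative:

```
my-app-challenge/
├─ application.yml
├─ README.md
├─ app/          # starter kit distributed to candidates
├─ solution/     # only the files that differ from app/
├─ hidden/       # up.sh, run.sh, private test data
└─ conversion/   # optional TAP converter
```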
◾️ Creating application.yml

`application.yml` is specific to Application Challenges.
| Key | Description |
|---|---|
| | Fixed to … |
| | Command executed for RUN (testing). Required. Usually … |
| | Images referenced from `README.md` |
| | ECS task size (optional) … |
| `conversion` | Convert test output to TAP (optional) … |
| | Number of test cases (required or optional depending on compiler options) |
| | File names that must exist in the uploaded ZIP |
| | Docker Compose–compatible service definitions (required). Valid keys: … |
◾️ Writing README.md
Write the full assignment in Markdown.
Official content follows a template; custom content may use any format.
Markdown extensions are available.
◾️ Preparing the Starter Files (app Directory)
Place the entire starter project here.
- Any `.gitignore` inside `app` is respected when generating the downloadable ZIP.
- You may include `docker-compose.yml` to simplify setup; see "HOST_NAME" and "Docker Image Names" below.
◾️ Providing a Sample Solution (solution Directory)
Include only the files that differ from `app`.
For solution preview, Track overlays `solution` on top of `app` before startup.
◾️ Authoring Startup / Test Scripts (hidden Directory)
hidden is not distributed to candidates.
Typical files:
- `up.sh` – start the Web server.
- `run.sh` – execute the test suite.
Each script must begin by downloading and unpacking the candidate ZIP via `$SUBMISSION_URL`:

```sh
wget -q -O temp.zip "$SUBMISSION_URL"
unzip -o temp.zip
rm temp.zip
echo "Succeed to extract zip"
# Start server or run tests below
```
You may also place additional private files (e.g., test data) here.
◾️ Converting Test Output to TAP (conversion Directory)
If your test runner cannot output TAP directly, place a converter script here and reference it via application.yml#conversion.
Otherwise, omit this directory.
◾️ Additional Topics
solution.md
If you provide an explanatory write-up, add it as solution.md.
Marine and the web UI do not display this file.
Docker Image Names
- In `application.yml`, prefer ECR images to avoid Docker Hub's 100 pulls / 6 h limit.
- In `docker-compose.yml` (inside `app`), use Docker Hub images because candidates cannot access ECR.
HOST_NAME
Fargate does not support Compose-style service aliases (`db:5432`).
Instead, containers must connect via `127.0.0.1:<port>`.
Add an environment variable in the Main Service whose value is `127.0.0.1`, and instruct candidates to reference that variable both locally and on Track.
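For instance, if the author exposes the variable as `DB_HOST` (a hypothetical name), candidate code can stay identical locally and on Track:

```javascript
// Works locally (docker-compose can set DB_HOST to a service alias or 127.0.0.1)
// and on Track, where the Main Service sets DB_HOST to 127.0.0.1.
const dbHost = process.env.DB_HOST || "127.0.0.1";
const dbPort = 5432;
console.log(`Connecting to database at ${dbHost}:${dbPort}`);
```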
◾️ Internationalization
During compilation, Track selects README files as follows:
| Language Option | Used File | Fallback |
|---|---|---|
| ja | `README_ja.md` | `README.md` |
| en | `README_en.md` | `README.md` |
The same rule applies to solution.md.
Other files are not internationalized.
◾️ Detailed Workflow: Build & Upload with Marine
A complete guide to creating coding challenges with Marine is available here:
How to Build a Track Challenge
■ Overview
A Challenge asks users to write code and validates the result with unit tests.
Four categories exist:
- CLI (`cli`) – language-agnostic CLI apps
- Function (`function`) – language-agnostic single functions
- Coding (`coding`) – language-specific implementations
- Frontend (`frontend`) – browser-side development tasks

■ Challenge Types
| Type | Core Idea |
|---|---|
| CLI | Read parameters via CLI args or STDIN, print to STDOUT. Any language permitted. |
| Function | Implement a predefined function. Args / return types: int, string, float, boolean, and 1-D / 2-D arrays. |
| Coding | Fill in parts of starter code in a specific language. |
| Frontend | Build a front-end app with a selectable framework / template. |
■ Official vs. Custom Challenges
Official Challenges are provided by Track and, depending on your plan, are available to all organizations.
Custom Challenges are created by an organization and are visible only to that organization.
Official Challenges must include at least one full-score solution to guarantee quality; Custom Challenges only need a compilable, non-zero solution.
■ Testing Framework
Track relies on TAP output.
• CLI / Function: write tests in JavaScript with mocha -R tap (helper library codecheck available).
• Coding: use any TAP-capable framework included in your Docker image.
- `ok n …` = pass
- `not ok n …` = fail
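For reference, a run with one passing and one failing test produces TAP output roughly like this (test titles are arbitrary):

```
1..2
ok 1 reverses a string
not ok 2 handles empty input
```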
■ track.yml Keys
Mandatory
type, test, testcases
Common optional keys
editable, readonly, hide, testExclude, images, allowNewFile, build, initialize, mainFile, languages, templateType, envConf, evaluationPoint, solutions, namedSolutions, shared, debug, watch, function, functionInputDir, systemTag
Example
```yaml
type: coding
allowNewFile: true
test: mocha
editable:
  - answer.md
readonly:
  - test/base.js
  - test/cases.json
hide:
  - test/secret*
testExclude:
  - answer.md
languages: all
solutions:
  - solution.js
testcases:
  open: 8
  secret: 8
evaluationPoint:
  File IO: Check file operations
  Error Handling: Check exception handling
```
■ README.md

Write the problem statement in Markdown.
Track supports several Markdown extensions (see the extension guide).
Do not list README.md in track.yml.
■ Open Tests vs. Secret Tests
- Files under `readonly` → open tests (visible to users)
- Files under `hide` → secret tests (hidden)

`testcases.open` / `testcases.secret` must match the actual counts; mismatches raise warnings.
■ answer.md

Template shown to users for free-text comments after coding.
If absent, a blank page is displayed.

■ solution.md

Author-facing explanation of the model solution.
Visible only to administrators, never to candidates.
■ Internationalization
On compilation Track chooses files as follows:
• ja → README_ja.md else README.md
• en → README_en.md else README.md
solution.md follows the same rule.
You may add track_en.yml / track_ja.yml; top-level keys in these files override the main track.yml.
At runtime the environment variable CHALLENGE_LANGUAGE contains ja or en—use it to localize error messages.
■ BEGIN_CHALLENGE / END_CHALLENGE
In any file listed in `editable` or `readonly`, code lines between `BEGIN_CHALLENGE` and `END_CHALLENGE` are removed before delivery to candidates (but remain visible in Marine Dev Mode).
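A sketch of how the markers might appear in a starter file, assuming they are placed in line comments of the file's language (the function itself is a made-up example):

```javascript
function reverseWords(input) {
  // BEGIN_CHALLENGE
  // Everything between the markers is stripped from the file candidates receive.
  return input.split(" ").reverse().join(" ");
  // END_CHALLENGE
}
```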
■ solutions
Register model answers.
Syntax: destName:sourcePath (dest = filename seen during testing).
Any solution.* file in the root folder is auto-registered.
CLI / Function: one solution.ext per language (plus additional files such as package.json if needed).
Coding: all listed files overwrite candidates’ versions during solution verification.
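For illustration, a Coding Challenge could register a model answer kept in a separate folder like this (the paths are hypothetical; the entry follows the `destName:sourcePath` syntax above):

```yaml
solutions:
  - main.js:solutions/answer.js  # main.js is the filename seen during testing
```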
■ namedSolutions
Define multiple solution sets with names.
Frontend Challenges use the template name as name.
```yaml
namedSolutions:
  - name: react-ts
    files:
      - src/App.tsx:solution/react-ts/src/App.tsx
      - index.html:solution/react-ts/index.html
```
■ Docker Images
See env.md for all available images.
Coding Challenges must set envConf.
CLI / Function templates automatically select the correct image.
■ CLI Templates
Supported languages: c, cpp, csharp, java, nodejs, php, python, python3, scala, perl, ruby, go, swift, kotlin, rust.
Provide a Docker image and template to add more.
■ Function Definition (Function Challenges)
Define in track.yml:
```yaml
function:
  name: myFunction
  arguments:
    - name: a
      type: string
    - name: b
      type: int
    - name: c
      type: string[]
  result:
    type: string[]
```
Supported types: int, string, float, boolean plus 1-D / 2-D arrays.
Strings cannot contain whitespace; int range ±9 007 199 254 740 991.
■ Writing Tests for CLI Challenges
Use codecheck with mocha:
```javascript
const codecheck = require("codecheck");
const app = codecheck.consoleApp(process.env.APP_COMMAND);

codecheck
  .readLines("test/in/input.txt")
  .then(app.run)
  .then(result => {
    result.stdout.should.eql("expected output");
  });
```

`APP_COMMAND` contains the user's executable.
■ templateType
CLI templates:
- `default` – command-line arguments
- `stdin` – STDIN
- `plain` – no input
If omitted, default is used.
■ Writing Tests for Function Challenges
Define input JSON per test case under functionInputDir; Track converts it to a plain text file:
`input1.in.json`

```json
{ "a": "foo", "b": 1, "c": ["bar", "baz"] }
```

→ `input1.in`

```
foo 1 2 bar baz
```
Use codecheck or custom scripts to pass the values to the function and verify the return value.
■ initialize
Run one-time setup when the editor opens.
```yaml
initialize:
  files:
    - test/judge.cpp
  commands:
    - g++ -std=c++11 -o test/judge test/judge.cpp
    - rm test/judge.cpp
  showConsole: true
```
■ evaluationPoint
Group tests by evaluation criteria.
```yaml
evaluationPoint:
  FileIO: Test file operations
  ErrorHandle: Test error processing
```
Prefix a TAP test title with [FileIO] etc. to assign it.
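With mocha this simply means putting the prefix in the test title, roughly as sketched below (the assertions are placeholders):

```javascript
const assert = require("assert");

describe("my challenge", () => {
  // The [FileIO] prefix assigns this test to the FileIO evaluation point.
  it("[FileIO] writes and reads a temporary file", () => {
    assert.ok(true);
  });

  // The [ErrorHandle] prefix assigns this test to the ErrorHandle evaluation point.
  it("[ErrorHandle] rejects malformed input", () => {
    assert.ok(true);
  });
});
```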
■ shared
Reuse files across multiple challenges.
```yaml
readonly:
  - sql/init.sql
shared:
  - sql:../shared/sql
```
alias example:
```yaml
shared:
  - sharedTest:../shared/test:test
  - sharedJS:../shared/js:_
```
`_` places files in the root directory.
■ debug
Provide quick test commands for candidates.
```yaml
debug:
  command: cat $f | $c
  raw:[Case 1]a b c
  file:[Case 2]test/in/sample.txt
```
Placeholders
- `$f` = input file
- `$*` = args list
- `$c` = user command
codecheck-based challenges generate debug entries automatically.
■ systemTag
Flags that modify Track behavior:
- `alwaysSave` – Evaluate on every local run (needed for server-side deploy tasks).
- `incompleteSolution` – Allow official challenges without a full-score solution.
- `simpleBuild` – Skip heavy build steps (e.g., no npm install).
(End of document)
After creating your challenge according to the guide, send the entire project to the Track Customer Success team.
