feat: implement Phase 2 - Web Audio API engine and waveform visualization
Phase 2 Complete Features:
- Web Audio API context management with browser compatibility
- Audio file upload with drag-and-drop support
- Audio decoding for multiple formats (WAV, MP3, OGG, FLAC, AAC, M4A)
- AudioPlayer class with full playback control
- Waveform visualization using Canvas API
- Real-time waveform rendering with progress indicator
- Playback controls (play, pause, stop, seek)
- Volume control with mute/unmute
- Timeline scrubbing
- Audio file information display
Components:
- AudioEditor: Main editor container
- FileUpload: Drag-and-drop file upload component
- AudioInfo: Display audio file metadata
- Waveform: Canvas-based waveform visualization
- PlaybackControls: Transport controls with volume slider
Audio Engine:
- lib/audio/context.ts: AudioContext management
- lib/audio/decoder.ts: Audio file decoding utilities
- lib/audio/player.ts: AudioPlayer class for playback
- lib/waveform/peaks.ts: Waveform peak generation
Hooks:
- useAudioPlayer: Complete audio player state management
Types:
- types/audio.ts: TypeScript definitions for audio types
Features Working:
✓ Load audio files via drag-and-drop or file picker
✓ Display waveform with real-time progress
✓ Play/pause/stop controls
✓ Seek by clicking on waveform or using timeline slider
✓ Volume control with visual feedback
✓ Audio file metadata display (duration, sample rate, channels)
✓ Toast notifications for user feedback
✓ SSR-safe audio context initialization
✓ Dark/light theme support
Tech Stack:
- Web Audio API for playback
- Canvas API for waveform rendering
- React 19 hooks for state management
- TypeScript for type safety
Build verified and working ✓
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 15:32:00 +01:00
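The peak-generation step mentioned above (lib/waveform/peaks.ts) reduces each channel to per-bucket min/max pairs before drawing. A minimal sketch of that idea — the function name `computePeaks` and its signature are illustrative assumptions, not the module's actual API:

```typescript
// Illustrative sketch of waveform peak extraction: split the channel into
// `buckets` equal slices and record each slice's min and max sample.
// Assumes channel.length >= buckets; name and shape are hypothetical.
export function computePeaks(
  channel: Float32Array,
  buckets: number
): Array<{ min: number; max: number }> {
  const peaks: Array<{ min: number; max: number }> = [];
  const samplesPerBucket = Math.floor(channel.length / buckets);
  for (let i = 0; i < buckets; i++) {
    let min = 1;
    let max = -1;
    const start = i * samplesPerBucket;
    for (let j = start; j < start + samplesPerBucket; j++) {
      const s = channel[j];
      if (s < min) min = s;
      if (s > max) max = s;
    }
    peaks.push({ min, max });
  }
  return peaks;
}
```

Rendering then maps each `{ min, max }` pair to one vertical line on the canvas, which is why zooming only requires recomputing peaks at a different bucket count.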
'use client';

import * as React from 'react';
import { Music, Plus, Upload, Trash2, Settings, Download } from 'lucide-react';
import { PlaybackControls } from './PlaybackControls';
import { MasterControls } from '@/components/controls/MasterControls';
import { FrequencyAnalyzer } from '@/components/analysis/FrequencyAnalyzer';
import { Spectrogram } from '@/components/analysis/Spectrogram';
feat: complete Phase 10 - add phase correlation, LUFS, and audio statistics
Implemented remaining Phase 10 analysis tools:
**Phase Correlation Meter (10.3)**
- Real-time stereo phase correlation display
- Pearson correlation coefficient calculation
- Color-coded indicator (-1 to +1 scale)
- Visual feedback: Mono-like, Good Stereo, Wide Stereo, Phase Issues
**LUFS Loudness Meter (10.3)**
- Momentary, Short-term, and Integrated LUFS measurements
- Simplified K-weighting approximation
- Vertical bar display with -70 to 0 LUFS range
- -23 LUFS broadcast standard reference line
- Real-time history tracking (10 seconds)
**Audio Statistics (10.4)**
- Project info: track count, duration, sample rate, channels, bit depth
- Level analysis: peak, RMS, dynamic range, headroom
- Real-time buffer analysis from all tracks
- Color-coded warnings for clipping and low headroom
**Integration**
- Added 5-button toggle in master column (FFT, SPEC, PHS, LUFS, INFO)
- All analyzers share consistent 192px width layout
- Theme-aware styling for light/dark modes
- Compact button labels for space efficiency
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 02:00:41 +01:00
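The Pearson correlation the phase meter displays can be sketched as a standalone function. This is a simplified illustration under assumed names — the actual component pulls sample blocks from an `AnalyserNode` each animation frame rather than taking arrays directly:

```typescript
// Pearson correlation coefficient between left and right channel blocks.
// +1 means the channels move together (mono-like), values near 0 indicate
// a wide stereo image, and -1 means the channels are out of phase.
export function phaseCorrelation(left: Float32Array, right: Float32Array): number {
  const n = Math.min(left.length, right.length);
  let sumL = 0;
  let sumR = 0;
  for (let i = 0; i < n; i++) {
    sumL += left[i];
    sumR += right[i];
  }
  const meanL = sumL / n;
  const meanR = sumR / n;

  let cov = 0;
  let varL = 0;
  let varR = 0;
  for (let i = 0; i < n; i++) {
    const dl = left[i] - meanL;
    const dr = right[i] - meanR;
    cov += dl * dr;
    varL += dl * dl;
    varR += dr * dr;
  }
  const denom = Math.sqrt(varL * varR);
  return denom === 0 ? 0 : cov / denom; // guard against silent input
}
```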
import { PhaseCorrelationMeter } from '@/components/analysis/PhaseCorrelationMeter';
import { LUFSMeter } from '@/components/analysis/LUFSMeter';
import { AudioStatistics } from '@/components/analysis/AudioStatistics';
import { ThemeToggle } from '@/components/layout/ThemeToggle';
import { CommandPalette } from '@/components/ui/CommandPalette';
import { GlobalSettingsDialog } from '@/components/settings/GlobalSettingsDialog';
import { ExportDialog, type ExportSettings } from '@/components/dialogs/ExportDialog';
import { Button } from '@/components/ui/Button';
import type { CommandAction } from '@/components/ui/CommandPalette';
import { useMultiTrack } from '@/lib/hooks/useMultiTrack';
import { useMultiTrackPlayer } from '@/lib/hooks/useMultiTrackPlayer';
import { useEffectChain } from '@/lib/hooks/useEffectChain';
import { useToast } from '@/components/ui/Toast';
import { TrackList } from '@/components/tracks/TrackList';
import { ImportTrackDialog } from '@/components/tracks/ImportTrackDialog';
import { formatDuration } from '@/lib/audio/decoder';
import { useHistory } from '@/lib/hooks/useHistory';
import { useRecording } from '@/lib/hooks/useRecording';
import type { EffectType } from '@/lib/audio/effects/chain';
import {
  createMultiTrackCutCommand,
  createMultiTrackCopyCommand,
  createMultiTrackDeleteCommand,
  createMultiTrackPasteCommand,
  createMultiTrackDuplicateCommand,
} from '@/lib/history/commands/multi-track-edit-command';
import { extractBufferSegment } from '@/lib/audio/buffer-utils';
import { mixTracks, getMaxTrackDuration } from '@/lib/audio/track-utils';
import { audioBufferToWav, audioBufferToMp3, downloadArrayBuffer } from '@/lib/audio/export';

export function AudioEditor() {
  const [importDialogOpen, setImportDialogOpen] = React.useState(false);
  const [selectedTrackId, setSelectedTrackId] = React.useState<string | null>(null);
  const [zoom, setZoom] = React.useState(1);
  const [masterVolume, setMasterVolume] = React.useState(0.8);
  const [masterPan, setMasterPan] = React.useState(0);
  const [isMasterMuted, setIsMasterMuted] = React.useState(false);
  const [clipboard, setClipboard] = React.useState<AudioBuffer | null>(null);
  const [recordingTrackId, setRecordingTrackId] = React.useState<string | null>(null);
  const [punchInEnabled, setPunchInEnabled] = React.useState(false);
  const [punchInTime, setPunchInTime] = React.useState(0);
  const [punchOutTime, setPunchOutTime] = React.useState(0);
  const [overdubEnabled, setOverdubEnabled] = React.useState(false);
  const [settingsDialogOpen, setSettingsDialogOpen] = React.useState(false);
  const [exportDialogOpen, setExportDialogOpen] = React.useState(false);
  const [isExporting, setIsExporting] = React.useState(false);
  const [analyzerView, setAnalyzerView] = React.useState<'frequency' | 'spectrogram' | 'phase' | 'lufs' | 'stats'>('frequency');

  const { addToast } = useToast();

  // Command history for undo/redo
  const { execute: executeCommand, undo, redo, state: historyState } = useHistory();
  const canUndo = historyState.canUndo;
  const canRedo = historyState.canRedo;

  // Recording hook
  const {
    state: recordingState,
    settings: recordingSettings,
    startRecording,
    stopRecording,
    requestPermission,
    setInputGain,
    setRecordMono,
    setSampleRate,
  } = useRecording();

  // Multi-track hooks
  const {
    tracks,
    addTrack: addTrackOriginal,
    addTrackFromBuffer: addTrackFromBufferOriginal,
    removeTrack,
    updateTrack,
    clearTracks,
  } = useMultiTrack();

  // Track whether we should auto-select on next add (when project is empty)
  const shouldAutoSelectRef = React.useRef(true);

  React.useEffect(() => {
    // Update auto-select flag based on track count
    shouldAutoSelectRef.current = tracks.length === 0;
  }, [tracks.length]);

  // Wrap addTrack to auto-select the first track when adding to an empty project
  const addTrack = React.useCallback((name?: string) => {
    const shouldAutoSelect = shouldAutoSelectRef.current;
    const track = addTrackOriginal(name);
    if (shouldAutoSelect) {
      setSelectedTrackId(track.id);
      shouldAutoSelectRef.current = false; // Only auto-select once
    }
    return track;
  }, [addTrackOriginal]);

  // Wrap addTrackFromBuffer to auto-select the first track when adding to an empty project
  const addTrackFromBuffer = React.useCallback((buffer: AudioBuffer, name?: string) => {
    console.log(`[AudioEditor] addTrackFromBuffer wrapper called: ${name}, shouldAutoSelect: ${shouldAutoSelectRef.current}`);
    const shouldAutoSelect = shouldAutoSelectRef.current;
    const track = addTrackFromBufferOriginal(buffer, name);
    console.log(`[AudioEditor] Track created: ${track.name} (${track.id})`);
    if (shouldAutoSelect) {
      console.log(`[AudioEditor] Auto-selecting track: ${track.id}`);
      setSelectedTrackId(track.id);
      shouldAutoSelectRef.current = false; // Only auto-select once
    }
    return track;
  }, [addTrackFromBufferOriginal]);

  // Track which parameters are being touched (for touch/latch modes)
  const [touchedParameters, setTouchedParameters] = React.useState<Set<string>>(new Set());
  const [latchTriggered, setLatchTriggered] = React.useState<Set<string>>(new Set());

  // Track last recorded values to detect changes
  const lastRecordedValuesRef = React.useRef<Map<string, { value: number; time: number }>>(new Map());

  // Automation recording callback
  const handleAutomationRecording = React.useCallback((
    trackId: string,
    laneId: string,
    currentTime: number,
    value: number
  ) => {
    const track = tracks.find(t => t.id === trackId);
    if (!track) return;

    const lane = track.automation.lanes.find(l => l.id === laneId);
    if (!lane) return;

    const paramKey = `${trackId}-${laneId}`;
    let shouldRecord = false;

    // Determine if we should record based on mode
    switch (lane.mode) {
      case 'write':
        // Always record in write mode
        shouldRecord = true;
        break;

      case 'touch':
        // Only record while the parameter is being touched
        shouldRecord = touchedParameters.has(paramKey);
        break;

      case 'latch':
        // Record from first touch until playback stops
        if (touchedParameters.has(paramKey)) {
          setLatchTriggered(prev => new Set(prev).add(paramKey));
        }
        shouldRecord = latchTriggered.has(paramKey);
        break;

      default:
        shouldRecord = false;
    }

    if (!shouldRecord) return;

    // Throttle recording so we don't create too many automation points;
    // this only limits frequency, it never blocks recording outright
    const lastRecorded = lastRecordedValuesRef.current.get(paramKey);

    if (lastRecorded && currentTime - lastRecorded.time < 0.1) {
      // Skip if the value hasn't changed significantly since the last point
      const valueChanged = Math.abs(lastRecorded.value - value) > 0.001;
      if (!valueChanged) {
        return;
      }
    }

    // Update last recorded value
    lastRecordedValuesRef.current.set(paramKey, { value, time: currentTime });

    // Create new automation point
    const newPoint = {
      id: `point-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`,
      time: currentTime,
      value,
      curve: 'linear' as const,
    };

    // In write mode, remove existing points near this time (overwrite)
    const updatedPoints = lane.mode === 'write'
      ? [...lane.points.filter(p => Math.abs(p.time - currentTime) > 0.05), newPoint]
      : [...lane.points, newPoint];

    updatedPoints.sort((a, b) => a.time - b.time);

    // Update the lane with the new points
    const updatedLanes = track.automation.lanes.map(l =>
      l.id === laneId ? { ...l, points: updatedPoints } : l
    );

    updateTrack(trackId, {
      automation: {
        ...track.automation,
        lanes: updatedLanes,
      },
    });
  }, [tracks, updateTrack, touchedParameters, latchTriggered]);

  // Helper to mark a parameter as touched (for touch/latch modes)
  const setParameterTouched = React.useCallback((trackId: string, laneId: string, touched: boolean) => {
    const paramKey = `${trackId}-${laneId}`;
    setTouchedParameters(prev => {
      const next = new Set(prev);
      if (touched) {
        next.add(paramKey);
      } else {
        next.delete(paramKey);
      }
      return next;
    });
  }, []);
feat: complete Phase 7.4 - real-time track effects system
Implemented comprehensive real-time effect processing for multi-track audio:
Core Features:
- Per-track effect chains with drag-and-drop reordering
- Effect bypass/enable toggle per effect
- Real-time parameter updates (filters, dynamics, time-based, distortion, bitcrusher, pitch, timestretch)
- Add/remove effects during playback without interruption
- Effect chain persistence via localStorage
- Automatic playback stop when tracks are deleted
Technical Implementation:
- Effect processor with dry/wet routing for bypass functionality
- Real-time effect parameter updates using AudioParam setValueAtTime
- Structure change detection for add/remove/reorder operations
- Stale closure fix using refs for latest track state
- ScriptProcessorNode for bitcrusher, pitch shifter, and time stretch
- Dual-tap delay line for pitch shifting
- Overlap-add synthesis for time stretching
UI Components:
- EffectBrowser dialog with categorized effects
- EffectDevice component with parameter controls
- EffectParameters for all 19 real-time effect types
- Device rack with horizontal scrolling (Ableton-style)
Removed offline-only effects (normalize, fadeIn, fadeOut, reverse) as they don't fit the real-time processing model.
Completed all items in Phase 7.4:
- [x] Per-track effect chain
- [x] Effect rack UI
- [x] Effect bypass per track
- [x] Real-time effect processing during playback
- [x] Add/remove effects during playback
- [x] Real-time parameter updates
- [x] Effect chain persistence
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 12:08:33 +01:00
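The dry/wet routing used for bypass can be illustrated with a small gain-law helper. This sketch assumes an equal-power crossfade — a common choice, but not confirmed as the exact curve the effect processor uses; the real chain would apply such values to `GainNode.gain` via `setValueAtTime`:

```typescript
// Hypothetical equal-power dry/wet gain computation for effect bypass.
// mix = 0 routes only the unprocessed (dry) signal, mix = 1 only the
// processed (wet) signal; the cos/sin pair keeps combined power constant.
export function dryWetGains(mix: number): { dry: number; wet: number } {
  const m = Math.min(1, Math.max(0, mix)); // clamp to [0, 1]
  return {
    dry: Math.cos((m * Math.PI) / 2),
    wet: Math.sin((m * Math.PI) / 2),
  };
}
```

Bypassing an effect then amounts to ramping `mix` to 0, which is why effects can be toggled during playback without clicks or interruption.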
  const {
    isPlaying,
    currentTime,
    duration,
    trackLevels,
    masterPeakLevel,
    masterRmsLevel,
    masterIsClipping,
    masterAnalyser,
    resetClipIndicator,
    play,
    pause,
    stop,
    seek,
    togglePlayPause,
  } = useMultiTrackPlayer(tracks, masterVolume, handleAutomationRecording);

  // Reset latch-triggered state when playback stops
  React.useEffect(() => {
    if (!isPlaying) {
      setLatchTriggered(new Set());
      lastRecordedValuesRef.current.clear();
    }
  }, [isPlaying]);

  // Record effect parameter values while touched
  React.useEffect(() => {
    if (!isPlaying) return;

    const recordEffectParams = () => {
      const time = currentTime;

      touchedParameters.forEach(paramKey => {
        const [trackId, laneId] = paramKey.split('-');
        const track = tracks.find(t => t.id === trackId);
        if (!track) return;

        const lane = track.automation.lanes.find(l => l.id === laneId);
        if (!lane || !lane.parameterId.startsWith('effect.')) return;

        // Parse effect parameter ID: effect.{effectId}.{paramName}
        const parts = lane.parameterId.split('.');
        if (parts.length !== 3) return;

        const effectId = parts[1];
        const paramName = parts[2];

        const effect = track.effectChain.effects.find(e => e.id === effectId);
        if (!effect || !effect.parameters) return;

        const currentValue = (effect.parameters as Record<string, number>)[paramName];
        if (currentValue === undefined) return;

        // Normalize value to the lane's 0-1 range
        const range = lane.valueRange.max - lane.valueRange.min;
        const normalizedValue = (currentValue - lane.valueRange.min) / range;

        // Record the automation
        handleAutomationRecording(trackId, laneId, time, normalizedValue);
      });
    };

    const interval = setInterval(recordEffectParams, 50); // Record every 50ms while touched
    return () => clearInterval(interval);
  }, [isPlaying, currentTime, touchedParameters, tracks, handleAutomationRecording]);

  // Master effect chain
  const {
    chain: masterEffectChain,
    presets: masterEffectPresets,
    toggleEffectEnabled: toggleMasterEffect,
    removeEffect: removeMasterEffect,
    reorder: reorderMasterEffects,
    clearChain: clearMasterChain,
    savePreset: saveMasterPreset,
    loadPresetToChain: loadMasterPreset,
    deletePreset: deleteMasterPreset,
  } = useEffectChain();
|
|
|
|
2025-11-17 21:57:31 +01:00
|
|
|
// Multi-track handlers
|
|
|
|
|
const handleImportTracks = () => {
|
|
|
|
|
setImportDialogOpen(true);
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const handleImportTrack = (buffer: AudioBuffer, name: string) => {
|
2025-11-18 18:13:38 +01:00
|
|
|
console.log(`[AudioEditor] handleImportTrack called: ${name}`);
|
2025-11-17 21:57:31 +01:00
|
|
|
addTrackFromBuffer(buffer, name);
|
|
|
|
|
};
|
|
|
|
|
|
|
|
|
|
const handleClearTracks = () => {
|
|
|
|
|
clearTracks();
|
2025-11-17 22:17:09 +01:00
|
|
|
setSelectedTrackId(null);
|
2025-11-17 21:57:31 +01:00
|
|
|
addToast({
|
|
|
|
|
title: 'Tracks Cleared',
|
|
|
|
|
description: 'All tracks have been removed',
|
|
|
|
|
variant: 'info',
|
|
|
|
|
duration: 2000,
|
|
|
|
|
});
|
|
|
|
|
};

  const handleRemoveTrack = (trackId: string) => {
    removeTrack(trackId);
    if (selectedTrackId === trackId) {
      setSelectedTrackId(null);
    }
  };

  // Per-track effect chain handlers
  const handleToggleTrackEffect = (effectId: string) => {
    if (!selectedTrack) return;
    const updatedChain = {
      ...selectedTrack.effectChain,
      effects: selectedTrack.effectChain.effects.map((e) =>
        e.id === effectId ? { ...e, enabled: !e.enabled } : e
      ),
    };
    updateTrack(selectedTrack.id, { effectChain: updatedChain });
  };

  const handleRemoveTrackEffect = (effectId: string) => {
    if (!selectedTrack) return;
    const updatedChain = {
      ...selectedTrack.effectChain,
      effects: selectedTrack.effectChain.effects.filter((e) => e.id !== effectId),
    };
    updateTrack(selectedTrack.id, { effectChain: updatedChain });
  };

  const handleReorderTrackEffects = (fromIndex: number, toIndex: number) => {
    if (!selectedTrack) return;
    const effects = [...selectedTrack.effectChain.effects];
    const [removed] = effects.splice(fromIndex, 1);
    effects.splice(toIndex, 0, removed);
    const updatedChain = {
      ...selectedTrack.effectChain,
      effects,
    };
    updateTrack(selectedTrack.id, { effectChain: updatedChain });
  };

  const handleClearTrackChain = () => {
    if (!selectedTrack) return;
    const updatedChain = {
      ...selectedTrack.effectChain,
      effects: [],
    };
    updateTrack(selectedTrack.id, { effectChain: updatedChain });
  };

  // Effects Panel handlers
  const handleAddEffect = React.useCallback((effectType: EffectType) => {
    if (!selectedTrackId) return;
    const track = tracks.find((t) => t.id === selectedTrackId);
    if (!track) return;

    // Import createEffect and EFFECT_NAMES dynamically
    import('@/lib/audio/effects/chain').then(({ createEffect, EFFECT_NAMES }) => {
      const newEffect = createEffect(effectType, EFFECT_NAMES[effectType]);
      const updatedChain = {
        ...track.effectChain,
        effects: [...track.effectChain.effects, newEffect],
      };
      updateTrack(selectedTrackId, { effectChain: updatedChain });
    });
  }, [selectedTrackId, tracks, updateTrack]);

  const handleToggleEffect = React.useCallback((effectId: string) => {
    if (!selectedTrackId) return;
    const track = tracks.find((t) => t.id === selectedTrackId);
    if (!track) return;

    const updatedChain = {
      ...track.effectChain,
      effects: track.effectChain.effects.map((e) =>
        e.id === effectId ? { ...e, enabled: !e.enabled } : e
      ),
    };
    updateTrack(selectedTrackId, { effectChain: updatedChain });
  }, [selectedTrackId, tracks, updateTrack]);

  const handleRemoveEffect = React.useCallback((effectId: string) => {
    if (!selectedTrackId) return;
    const track = tracks.find((t) => t.id === selectedTrackId);
    if (!track) return;

    const updatedChain = {
      ...track.effectChain,
      effects: track.effectChain.effects.filter((e) => e.id !== effectId),
    };
    updateTrack(selectedTrackId, { effectChain: updatedChain });
  }, [selectedTrackId, tracks, updateTrack]);

  const handleUpdateEffect = React.useCallback((effectId: string, parameters: any) => {
    if (!selectedTrackId) return;
    const track = tracks.find((t) => t.id === selectedTrackId);
    if (!track) return;

    const updatedChain = {
      ...track.effectChain,
      effects: track.effectChain.effects.map((e) =>
        e.id === effectId ? { ...e, parameters } : e
      ),
    };
    updateTrack(selectedTrackId, { effectChain: updatedChain });
  }, [selectedTrackId, tracks, updateTrack]);

  const handleToggleEffectExpanded = React.useCallback((effectId: string) => {
    if (!selectedTrackId) return;
    const track = tracks.find((t) => t.id === selectedTrackId);
    if (!track) return;

    const updatedChain = {
      ...track.effectChain,
      effects: track.effectChain.effects.map((e) =>
        e.id === effectId ? { ...e, expanded: !e.expanded } : e
      ),
    };
    updateTrack(selectedTrackId, { effectChain: updatedChain });
  }, [selectedTrackId, tracks, updateTrack]);

  // Preserve effects panel state - don't auto-open/close on track selection

  // Selection handler
  const handleSelectionChange = (trackId: string, selection: { start: number; end: number } | null) => {
    updateTrack(trackId, { selection });
  };

  // Recording handlers
  const handleToggleRecordEnable = React.useCallback((trackId: string) => {
    const track = tracks.find((t) => t.id === trackId);
    if (!track) return;

    // Toggle record enable
    updateTrack(trackId, { recordEnabled: !track.recordEnabled });
  }, [tracks, updateTrack]);

  const handleStartRecording = React.useCallback(async () => {
    // Find the first armed track
    const armedTrack = tracks.find((t) => t.recordEnabled);
    if (!armedTrack) {
      addToast({
        title: 'No Track Armed',
        description: 'Please arm a track for recording first',
        variant: 'warning',
        duration: 3000,
      });
      return;
    }

    // Request microphone permission if needed
    const hasPermission = await requestPermission();
    if (!hasPermission) {
      addToast({
        title: 'Microphone Access Denied',
        description: 'Please allow microphone access to record',
        variant: 'error',
        duration: 3000,
      });
      return;
    }

    try {
      await startRecording();
      setRecordingTrackId(armedTrack.id);
      addToast({
        title: 'Recording Started',
        description: `Recording to ${armedTrack.name}`,
        variant: 'success',
        duration: 2000,
      });
    } catch (error) {
      console.error('Failed to start recording:', error);
      addToast({
        title: 'Recording Failed',
        description: 'Failed to start recording',
        variant: 'error',
        duration: 3000,
      });
    }
  }, [tracks, startRecording, requestPermission, addToast]);

  const handleStopRecording = React.useCallback(async () => {
    if (!recordingTrackId) return;

    try {
      const recordedBuffer = await stopRecording();

      if (recordedBuffer) {
        const track = tracks.find((t) => t.id === recordingTrackId);

        // Check if overdub mode is enabled and the track has existing audio
        if (overdubEnabled && track?.audioBuffer) {
          // Mix the recorded audio with the existing audio
          const audioContext = new AudioContext();
          const existingBuffer = track.audioBuffer;

          // Create a new buffer long enough to hold both
          const maxDuration = Math.max(existingBuffer.duration, recordedBuffer.duration);
          const maxChannels = Math.max(existingBuffer.numberOfChannels, recordedBuffer.numberOfChannels);
          const mixedBuffer = audioContext.createBuffer(
            maxChannels,
            Math.floor(maxDuration * existingBuffer.sampleRate),
            existingBuffer.sampleRate
          );

          // Mix each channel
          for (let channel = 0; channel < maxChannels; channel++) {
            const mixedData = mixedBuffer.getChannelData(channel);
            const existingData = channel < existingBuffer.numberOfChannels
              ? existingBuffer.getChannelData(channel)
              : new Float32Array(mixedData.length);
            const recordedData = channel < recordedBuffer.numberOfChannels
              ? recordedBuffer.getChannelData(channel)
              : new Float32Array(mixedData.length);

            // Mix the samples (average them to avoid clipping)
            for (let i = 0; i < mixedData.length; i++) {
              const existingSample = i < existingData.length ? existingData[i] : 0;
              const recordedSample = i < recordedData.length ? recordedData[i] : 0;
              mixedData[i] = (existingSample + recordedSample) / 2;
            }
          }

          updateTrack(recordingTrackId, { audioBuffer: mixedBuffer });

          addToast({
            title: 'Recording Complete (Overdub)',
            description: `Mixed ${recordedBuffer.duration.toFixed(2)}s with existing audio`,
            variant: 'success',
            duration: 3000,
          });
        } else {
          // Normal mode - replace the existing audio
          updateTrack(recordingTrackId, { audioBuffer: recordedBuffer });

          addToast({
            title: 'Recording Complete',
            description: `Recorded ${recordedBuffer.duration.toFixed(2)}s of audio`,
            variant: 'success',
            duration: 3000,
          });
        }
      }

      setRecordingTrackId(null);
    } catch (error) {
      console.error('Failed to stop recording:', error);
      addToast({
        title: 'Recording Error',
        description: 'Failed to save recording',
        variant: 'error',
        duration: 3000,
      });
      setRecordingTrackId(null);
    }
  }, [recordingTrackId, stopRecording, updateTrack, addToast, overdubEnabled, tracks]);

  // Edit handlers
  const handleCut = React.useCallback(() => {
    const track = tracks.find((t) => t.selection);
    if (!track || !track.audioBuffer || !track.selection) return;

    // Extract the selection to the clipboard
    const extracted = extractBufferSegment(
      track.audioBuffer,
      track.selection.start,
      track.selection.end
    );
    setClipboard(extracted);

    // Execute the cut command
    const command = createMultiTrackCutCommand(
      track.id,
      track.audioBuffer,
      track.selection,
      (trackId, buffer, selection) => {
        updateTrack(trackId, { audioBuffer: buffer, selection });
      }
    );
    executeCommand(command);

    addToast({
      title: 'Cut',
      description: 'Selection cut to clipboard',
      variant: 'success',
      duration: 2000,
    });
  }, [tracks, executeCommand, updateTrack, addToast]);

  const handleCopy = React.useCallback(() => {
    const track = tracks.find((t) => t.selection);
    if (!track || !track.audioBuffer || !track.selection) return;

    // Extract the selection to the clipboard
    const extracted = extractBufferSegment(
      track.audioBuffer,
      track.selection.start,
      track.selection.end
    );
    setClipboard(extracted);

    // Execute the copy command (doesn't modify the buffer; recorded for undo history)
    const command = createMultiTrackCopyCommand(
      track.id,
      track.audioBuffer,
      track.selection,
      (trackId, buffer, selection) => {
        updateTrack(trackId, { audioBuffer: buffer, selection });
      }
    );
    executeCommand(command);

    addToast({
      title: 'Copy',
      description: 'Selection copied to clipboard',
      variant: 'success',
      duration: 2000,
    });
  }, [tracks, executeCommand, updateTrack, addToast]);

  const handlePaste = React.useCallback(() => {
    if (!clipboard || !selectedTrackId) return;

    const track = tracks.find((t) => t.id === selectedTrackId);
    if (!track) return;

    // Paste at the current time, or at the end of the buffer
    const pastePosition = currentTime || track.audioBuffer?.duration || 0;

    const command = createMultiTrackPasteCommand(
      track.id,
      track.audioBuffer,
      clipboard,
      pastePosition,
      (trackId, buffer, selection) => {
        updateTrack(trackId, { audioBuffer: buffer, selection });
      }
    );
    executeCommand(command);

    addToast({
      title: 'Paste',
      description: 'Clipboard content pasted',
      variant: 'success',
      duration: 2000,
    });
  }, [clipboard, selectedTrackId, tracks, currentTime, executeCommand, updateTrack, addToast]);

  const handleDelete = React.useCallback(() => {
    const track = tracks.find((t) => t.selection);
    if (!track || !track.audioBuffer || !track.selection) return;

    const command = createMultiTrackDeleteCommand(
      track.id,
      track.audioBuffer,
      track.selection,
      (trackId, buffer, selection) => {
        updateTrack(trackId, { audioBuffer: buffer, selection });
      }
    );
    executeCommand(command);

    addToast({
      title: 'Delete',
      description: 'Selection deleted',
      variant: 'success',
      duration: 2000,
    });
  }, [tracks, executeCommand, updateTrack, addToast]);

  const handleDuplicate = React.useCallback(() => {
    const track = tracks.find((t) => t.selection);
    if (!track || !track.audioBuffer || !track.selection) return;

    const command = createMultiTrackDuplicateCommand(
      track.id,
      track.audioBuffer,
      track.selection,
      (trackId, buffer, selection) => {
        updateTrack(trackId, { audioBuffer: buffer, selection });
      }
    );
    executeCommand(command);

    addToast({
      title: 'Duplicate',
      description: 'Selection duplicated',
      variant: 'success',
      duration: 2000,
    });
  }, [tracks, executeCommand, updateTrack, addToast]);

  // Export handler
  const handleExport = React.useCallback(async (settings: ExportSettings) => {
    if (tracks.length === 0) {
      addToast({
        title: 'No Tracks',
        description: 'Add some tracks before exporting',
        variant: 'warning',
        duration: 3000,
      });
      return;
    }

    setIsExporting(true);

    try {
      const sampleRate = tracks[0]?.audioBuffer?.sampleRate || 44100;

      // Helper function to convert and download a buffer
      const convertAndDownload = async (buffer: AudioBuffer, filename: string) => {
        let exportedBuffer: ArrayBuffer;
        let mimeType: string;
        let fileExtension: string;

        if (settings.format === 'mp3') {
          exportedBuffer = await audioBufferToMp3(buffer, {
            format: 'mp3',
            bitrate: settings.bitrate,
            normalize: settings.normalize,
          });
          mimeType = 'audio/mpeg';
          fileExtension = 'mp3';
        } else {
          // WAV export
          exportedBuffer = audioBufferToWav(buffer, {
            format: 'wav',
            bitDepth: settings.bitDepth,
            normalize: settings.normalize,
          });
          mimeType = 'audio/wav';
          fileExtension = 'wav';
        }

        const fullFilename = `${filename}.${fileExtension}`;
        downloadArrayBuffer(exportedBuffer, fullFilename, mimeType);
        return fullFilename;
      };

      if (settings.scope === 'tracks') {
        // Export each track individually
        let exportedCount = 0;
        for (const track of tracks) {
          if (!track.audioBuffer) continue;

          const trackFilename = `${settings.filename}_${track.name.replace(/[^a-z0-9]/gi, '_')}`;
          await convertAndDownload(track.audioBuffer, trackFilename);
          exportedCount++;
        }

        addToast({
          title: 'Export Complete',
          description: `Exported ${exportedCount} track${exportedCount !== 1 ? 's' : ''}`,
          variant: 'success',
          duration: 3000,
        });
      } else if (settings.scope === 'selection') {
        // Export the selected region
        const selectedTrack = tracks.find((t) => t.selection);
        if (!selectedTrack || !selectedTrack.selection) {
          addToast({
            title: 'No Selection',
            description: 'No region selected for export',
            variant: 'warning',
            duration: 3000,
          });
          setIsExporting(false);
          return;
        }

        // Extract the selection from all tracks and mix
        const selectionStart = selectedTrack.selection.start;
        const selectionEnd = selectedTrack.selection.end;
        const selectionDuration = selectionEnd - selectionStart;

        // Create tracks containing only the selected region
        const selectedTracks = tracks.map((track) => ({
          ...track,
          audioBuffer: track.audioBuffer
            ? extractBufferSegment(track.audioBuffer, selectionStart, selectionEnd)
            : null,
        }));

        const mixedBuffer = mixTracks(selectedTracks, sampleRate, selectionDuration);
        const filename = await convertAndDownload(mixedBuffer, settings.filename);

        addToast({
          title: 'Export Complete',
          description: `Exported ${filename}`,
          variant: 'success',
          duration: 3000,
        });
      } else {
        // Export the entire project (mix all tracks)
        const maxDuration = getMaxTrackDuration(tracks);
        const mixedBuffer = mixTracks(tracks, sampleRate, maxDuration);
        const filename = await convertAndDownload(mixedBuffer, settings.filename);

        addToast({
          title: 'Export Complete',
          description: `Exported ${filename}`,
          variant: 'success',
          duration: 3000,
        });
      }

      setExportDialogOpen(false);
    } catch (error) {
      console.error('Export failed:', error);
      addToast({
        title: 'Export Failed',
        description: 'Failed to export audio',
        variant: 'error',
        duration: 3000,
      });
    } finally {
      setIsExporting(false);
    }
  }, [tracks, addToast]);

  // Zoom controls
  const handleZoomIn = () => {
    setZoom((prev) => Math.min(20, prev + 1));
  };

  const handleZoomOut = () => {
    setZoom((prev) => Math.max(1, prev - 1));
  };

  const handleFitToView = () => {
    setZoom(1);
  };

  // Find selected track
  const selectedTrack = tracks.find((t) => t.id === selectedTrackId);

  // Command palette actions
  const commandActions: CommandAction[] = React.useMemo(() => {
    const actions: CommandAction[] = [
      // Playback
      {
        id: 'play',
        label: 'Play',
        description: 'Start playback',
        shortcut: 'Space',
        category: 'playback',
        action: play,
      },
      {
        id: 'pause',
        label: 'Pause',
        description: 'Pause playback',
        shortcut: 'Space',
        category: 'playback',
        action: pause,
      },
      {
        id: 'stop',
        label: 'Stop',
        description: 'Stop playback',
        category: 'playback',
        action: stop,
      },
      // View
      {
        id: 'zoom-in',
        label: 'Zoom In',
        description: 'Zoom in on waveforms',
        category: 'view',
        action: handleZoomIn,
      },
      {
        id: 'zoom-out',
        label: 'Zoom Out',
        description: 'Zoom out on waveforms',
        category: 'view',
        action: handleZoomOut,
      },
      {
        id: 'fit-to-view',
        label: 'Fit to View',
        description: 'Reset zoom to fit all tracks',
        category: 'view',
        action: handleFitToView,
      },
      // Tracks
      {
        id: 'add-track',
        label: 'Add Empty Track',
        description: 'Create a new empty track',
        category: 'tracks',
        action: () => addTrack(),
      },
      {
        id: 'import-tracks',
        label: 'Import Audio Files',
        description: 'Import multiple audio files as tracks',
        category: 'tracks',
        action: handleImportTracks,
      },
      {
        id: 'clear-tracks',
        label: 'Clear All Tracks',
        description: 'Remove all tracks',
        category: 'tracks',
        action: handleClearTracks,
      },
    ];
    return actions;
  }, [play, pause, stop, handleZoomIn, handleZoomOut, handleFitToView, handleImportTracks, handleClearTracks, addTrack]);

  // Keyboard shortcuts
  React.useEffect(() => {
    const handleKeyDown = (e: KeyboardEvent) => {
      // Prevent shortcuts if typing in an input
      const isTyping = e.target instanceof HTMLInputElement || e.target instanceof HTMLTextAreaElement;

      // Spacebar: Play/Pause (unless typing or interacting with a button)
      if (e.code === 'Space' && !isTyping) {
        const target = e.target as HTMLElement;
        if (target instanceof HTMLButtonElement || target.getAttribute('role') === 'button') {
          return;
        }
        e.preventDefault();
        togglePlayPause();
        return;
      }

      if (isTyping) return;

      // Ctrl/Cmd+Z: Undo
      if ((e.ctrlKey || e.metaKey) && e.key === 'z' && !e.shiftKey) {
        e.preventDefault();
        if (canUndo) {
          undo();
        }
        return;
      }

      // Ctrl/Cmd+Shift+Z or Ctrl/Cmd+Y: Redo
      if (((e.ctrlKey || e.metaKey) && e.key === 'z' && e.shiftKey) || ((e.ctrlKey || e.metaKey) && e.key === 'y')) {
        e.preventDefault();
        if (canRedo) {
          redo();
        }
        return;
      }

      // Ctrl/Cmd+X: Cut
      if ((e.ctrlKey || e.metaKey) && e.key === 'x') {
        e.preventDefault();
        handleCut();
        return;
      }

      // Ctrl/Cmd+C: Copy
      if ((e.ctrlKey || e.metaKey) && e.key === 'c') {
        e.preventDefault();
        handleCopy();
        return;
      }

      // Ctrl/Cmd+V: Paste
      if ((e.ctrlKey || e.metaKey) && e.key === 'v') {
        e.preventDefault();
        handlePaste();
        return;
      }

      // Ctrl/Cmd+D: Duplicate
      if ((e.ctrlKey || e.metaKey) && e.key === 'd') {
        e.preventDefault();
        handleDuplicate();
        return;
      }

      // Delete or Backspace: Delete selection
      if (e.key === 'Delete' || e.key === 'Backspace') {
        e.preventDefault();
        handleDelete();
        return;
      }

      // Escape: Clear selection
      if (e.key === 'Escape') {
        e.preventDefault();
        setSelectedTrackId(null);
      }
    };

    window.addEventListener('keydown', handleKeyDown);
    return () => window.removeEventListener('keydown', handleKeyDown);
  }, [togglePlayPause, canUndo, canRedo, undo, redo, handleCut, handleCopy, handlePaste, handleDelete, handleDuplicate]);
|
2025-11-17 20:03:40 +01:00
|
|
|
|
  return (
    <>
      {/* Compact Header */}
      <header className="flex items-center justify-between px-4 py-2 border-b border-border bg-card flex-shrink-0 gap-4">
        {/* Left: Logo */}
        <div className="flex items-center gap-4 flex-shrink-0">
          <div className="flex items-center gap-2">
            <Music className="h-5 w-5 text-primary" />
            <h1 className="text-lg font-bold text-foreground">Audio UI</h1>
          </div>

          {/* Track Actions */}
          <div className="flex items-center gap-2 border-l border-border pl-4">
            <Button variant="outline" size="sm" onClick={() => addTrack()}>
              <Plus className="h-4 w-4 mr-1.5" />
              Add Track
            </Button>
            <Button variant="outline" size="sm" onClick={handleImportTracks}>
              <Upload className="h-4 w-4 mr-1.5" />
              Import
            </Button>
            {tracks.length > 0 && (
              <>
                <Button variant="outline" size="sm" onClick={() => setExportDialogOpen(true)}>
                  <Download className="h-4 w-4 mr-1.5" />
                  Export
                </Button>
                <Button variant="outline" size="sm" onClick={handleClearTracks}>
                  <Trash2 className="h-4 w-4 mr-1.5 text-destructive" />
                  Clear All
                </Button>
              </>
            )}
          </div>
        </div>

        {/* Right: Command Palette + Settings + Theme Toggle */}
        <div className="flex items-center gap-2 flex-shrink-0">
          <CommandPalette actions={commandActions} />
          <Button
            variant="ghost"
            size="icon"
            onClick={() => setSettingsDialogOpen(true)}
            title="Settings"
          >
            <Settings className="h-5 w-5" />
          </Button>
          <ThemeToggle />
        </div>
      </header>

      {/* Main content area */}
      <div className="flex flex-1 overflow-hidden">
        {/* Main canvas area */}
        <main className="flex-1 flex flex-col overflow-hidden bg-background">
          {/* Multi-Track View */}
          <div className="flex-1 flex flex-col overflow-hidden">
            <TrackList
              tracks={tracks}
              zoom={zoom}
              currentTime={currentTime}
              duration={duration}
              selectedTrackId={selectedTrackId}
              onSelectTrack={setSelectedTrackId}
              onAddTrack={addTrack}
              onImportTrack={handleImportTrack}
              onRemoveTrack={handleRemoveTrack}
              onUpdateTrack={updateTrack}
              onSeek={seek}
              onSelectionChange={handleSelectionChange}
              onToggleRecordEnable={handleToggleRecordEnable}
              recordingTrackId={recordingTrackId}
              recordingLevel={recordingState.inputLevel}
              trackLevels={trackLevels}
              onParameterTouched={setParameterTouched}
              isPlaying={isPlaying}
            />
          </div>
        </main>

        {/* Right Sidebar - Master Controls & Analyzers */}
        <aside className="flex-shrink-0 border-l border-border bg-card flex flex-col p-4 gap-4 w-[280px]">
          {/* Master Controls */}
          <div className="flex items-center justify-center">
            <MasterControls
              volume={masterVolume}
              pan={masterPan}
              peakLevel={masterPeakLevel}
              rmsLevel={masterRmsLevel}
              isClipping={masterIsClipping}
              isMuted={isMasterMuted}
              onVolumeChange={setMasterVolume}
              onPanChange={setMasterPan}
              onMuteToggle={() => {
                if (isMasterMuted) {
                  // Restore to the default level (the pre-mute volume is not tracked)
                  setMasterVolume(0.8);
                  setIsMasterMuted(false);
                } else {
                  setMasterVolume(0);
                  setIsMasterMuted(true);
                }
              }}
              onResetClip={resetClipIndicator}
            />
          </div>

          {/* Analyzer Toggle */}
          <div className="grid grid-cols-5 gap-0.5 bg-muted/20 border border-border/50 rounded-md p-0.5 max-w-[192px] mx-auto">
            {([
              ['frequency', 'FFT', 'Frequency Analyzer'],
              ['spectrogram', 'SPEC', 'Spectrogram'],
              ['phase', 'PHS', 'Phase Correlation'],
              ['lufs', 'LUFS', 'LUFS Loudness'],
              ['stats', 'INFO', 'Audio Statistics'],
            ] as const).map(([view, label, title]) => (
              <button
                key={view}
                onClick={() => setAnalyzerView(view)}
                className={`px-1 py-1 rounded text-[9px] font-bold uppercase tracking-wider transition-all ${
                  analyzerView === view
                    ? 'bg-accent text-accent-foreground shadow-sm'
                    : 'text-muted-foreground hover:text-foreground'
                }`}
                title={title}
              >
                {label}
              </button>
            ))}
          </div>
          {/* Analyzer Display */}
          <div className="flex-1 min-h-[360px] flex items-start justify-center">
            <div className="w-[192px]">
              {analyzerView === 'frequency' && <FrequencyAnalyzer analyserNode={masterAnalyser} />}
              {analyzerView === 'spectrogram' && <Spectrogram analyserNode={masterAnalyser} />}
              {analyzerView === 'phase' && <PhaseCorrelationMeter analyserNode={masterAnalyser} />}
              {analyzerView === 'lufs' && <LUFSMeter analyserNode={masterAnalyser} />}
              {analyzerView === 'stats' && <AudioStatistics tracks={tracks} />}
            </div>
          </div>
        </aside>
      </div>

      {/* Transport Controls */}
      <div className="border-t border-border bg-card p-3 flex justify-center">
        <PlaybackControls
          isPlaying={isPlaying}
          isPaused={!isPlaying}
          currentTime={currentTime}
          duration={duration}
          volume={masterVolume}
          onPlay={play}
          onPause={pause}
          onStop={stop}
          onSeek={seek}
          onVolumeChange={setMasterVolume}
          currentTimeFormatted={formatDuration(currentTime)}
          durationFormatted={formatDuration(duration)}
          isRecording={recordingState.isRecording}
          onStartRecording={handleStartRecording}
          onStopRecording={handleStopRecording}
          punchInEnabled={punchInEnabled}
          punchInTime={punchInTime}
          punchOutTime={punchOutTime}
          onPunchInEnabledChange={setPunchInEnabled}
          onPunchInTimeChange={setPunchInTime}
          onPunchOutTimeChange={setPunchOutTime}
          overdubEnabled={overdubEnabled}
          onOverdubEnabledChange={setOverdubEnabled}
        />
      </div>

      {/* Import Track Dialog */}
      <ImportTrackDialog
        open={importDialogOpen}
        onClose={() => setImportDialogOpen(false)}
        onImportTrack={handleImportTrack}
      />

      {/* Global Settings Dialog */}
      <GlobalSettingsDialog
        open={settingsDialogOpen}
        onClose={() => setSettingsDialogOpen(false)}
        recordingSettings={recordingSettings}
        onInputGainChange={setInputGain}
        onRecordMonoChange={setRecordMono}
        onSampleRateChange={setSampleRate}
      />

      {/* Export Dialog */}
      <ExportDialog
        open={exportDialogOpen}
        onClose={() => setExportDialogOpen(false)}
        onExport={handleExport}
        isExporting={isExporting}
        hasSelection={tracks.some(t => t.selection !== null)}
      />
    </>
  );
}
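The `PhaseCorrelationMeter` rendered in the sidebar consumes the master analyser node. As a standalone sketch of the underlying math (an assumption for illustration, not this component's actual implementation), the Pearson correlation between left and right channel buffers can be computed like this:

```typescript
// Hypothetical helper: Pearson correlation coefficient between two channel
// buffers. Returns a value in [-1, +1]: +1 means the channels move together
// (mono-like), 0 means uncorrelated, -1 means fully out of phase.
function phaseCorrelation(left: Float32Array, right: Float32Array): number {
  const n = Math.min(left.length, right.length);
  if (n === 0) return 0;

  // Channel means
  let sumL = 0;
  let sumR = 0;
  for (let i = 0; i < n; i++) {
    sumL += left[i];
    sumR += right[i];
  }
  const meanL = sumL / n;
  const meanR = sumR / n;

  // Covariance and per-channel variance (unnormalized; the n factors cancel)
  let cov = 0;
  let varL = 0;
  let varR = 0;
  for (let i = 0; i < n; i++) {
    const dl = left[i] - meanL;
    const dr = right[i] - meanR;
    cov += dl * dr;
    varL += dl * dl;
    varR += dr * dr;
  }

  const denom = Math.sqrt(varL * varR);
  return denom === 0 ? 0 : cov / denom; // guard silent/DC buffers
}
```

In a meter component this would typically run per animation frame on buffers filled via `AnalyserNode.getFloatTimeDomainData`, one analyser per channel of a `ChannelSplitterNode`.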