Compare commits

...

245 Commits

Author SHA1 Message Date
fa69ac649c fix: prevent playhead marker from continuing after pause
Fixed race condition where animation frames could continue after pause() by
adding an isPlayingRef that's checked at the start of updatePlaybackPosition.
This ensures any queued animation frames will exit early if pause was called,
preventing the playhead from continuing to move when audio has stopped.

This fixes the issue where the playhead marker would occasionally keep moving
after hitting spacebar to pause, even though no audio was playing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 12:53:13 +01:00
efd4cfa607 fix: adjust timeline right padding to match scrollbar width
Changed timeline right padding from 240px to 250px to account for the
vertical scrollbar width in the waveforms area, ensuring perfect alignment
between timeline and waveform playhead markers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 12:49:40 +01:00
0d8952ca2f fix: ensure consistent timeline-waveform alignment with always-visible scrollbar
Changed waveforms container from overflow-y-auto to overflow-y-scroll to ensure
the vertical scrollbar is always visible. This maintains consistent width alignment
between the timeline and waveform areas, preventing playhead marker misalignment
that occurred when scrollbar appeared/disappeared based on track count.

Fixes the issue where timeline was slightly wider than waveforms when vertical
scrollbar was present, causing ~15px offset in playhead position.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 12:45:13 +01:00
1b30931615 style: add custom-scrollbar styling to all dialog scrollable areas
Added custom-scrollbar class to scrollable elements in:
- EffectBrowser dialog content area
- ProjectsDialog projects list
- CommandPalette results list
- Modal content area
- TrackList automation parameter label containers

This ensures consistent scrollbar styling across all dialogs and UI elements.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 12:37:16 +01:00
d9bd8246c9 fix: prevent infinite loop in waveform rendering
- Add safety check for duration === 0 (first track scenario)
- Ensure trackWidth doesn't exceed canvas width
- Use Math.floor for integer loop iterations
- Prevent division by zero in samplesPerPixel calculation
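
The checks listed above can be sketched as a single guarded loop (function name and shape are illustrative, not the actual Waveform code):

```typescript
// Illustrative guard set; returns how many pixel columns get drawn.
function drawColumns(
  sampleCount: number,
  trackWidth: number,
  canvasWidth: number,
  duration: number
): number {
  if (duration === 0) return 0;                    // first-track scenario: nothing to render yet
  const width = Math.min(trackWidth, canvasWidth); // never paint past the canvas
  const columns = Math.floor(width);               // integer loop iterations
  if (columns === 0) return 0;                     // avoids division by zero below
  const samplesPerPixel = sampleCount / columns;
  let drawn = 0;
  for (let x = 0; x < columns; x++) {
    // ...look up peaks around index Math.floor(x * samplesPerPixel) and draw...
    drawn++;
  }
  return drawn;
}
```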

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 11:24:31 +01:00
dd8d46795a fix: waveform rendering to respect track duration vs project duration
- Calculate trackWidth based on track duration ratio to project duration
- Shorter tracks now render proportionally instead of stretching
- Longer tracks automatically update project duration
- All existing tracks re-render correctly when duration changes
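
A rough sketch of the proportional sizing (helper and field names are assumptions, not the actual code):

```typescript
// A longer track extends the project duration; shorter tracks render
// proportionally narrower instead of stretching to full width.
function trackLayout(
  trackDuration: number,
  projectDuration: number,
  fullWidth: number
): { projectDuration: number; trackWidth: number } {
  const newProjectDuration = Math.max(projectDuration, trackDuration);
  const trackWidth = fullWidth * (trackDuration / newProjectDuration);
  return { projectDuration: newProjectDuration, trackWidth };
}
```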

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 11:17:28 +01:00
adcc97eb5a fix: timeline tooltip positioning to account for left padding
- Add controlsWidth offset to tooltip left position
- Tooltip now appears correctly at mouse cursor position

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 10:36:40 +01:00
1855988b83 fix: timeline zoom and waveform rendering improvements
- Fix timeline width calculation to always fill viewport at minimum
- Fix waveform sampling to match timeline width calculation exactly
- Fix infinite scroll loop by removing circular callback
- Ensure scrollbars appear correctly when zooming in
- Use consistent PIXELS_PER_SECOND_BASE = 5 across all components
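
The shared width calculation reduces to one formula — the constant name is from the commit, and the `Math.max` clamp reflects the "fill viewport at minimum" fix:

```typescript
const PIXELS_PER_SECOND_BASE = 5; // shared across timeline and waveforms

// Timeline and waveform widths come from the same formula so playhead
// positions stay aligned; Math.max keeps the content from shrinking
// below the visible viewport when zoomed out.
function contentWidth(duration: number, zoom: number, viewportWidth: number): number {
  return Math.max(viewportWidth, duration * zoom * PIXELS_PER_SECOND_BASE);
}
```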

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 10:34:31 +01:00
477a444c78 feat: implement TimeScale component with proper zoom calculation
- Add TimeScale component with canvas-based rendering
- Use 5 pixels per second base scale (duration * zoom * 5)
- Implement viewport-based rendering for performance
- Add scroll synchronization with waveforms
- Add 240px padding for alignment with track controls and master area
- Apply custom scrollbar styling
- Update all waveform width calculations to match timeline

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 10:12:13 +01:00
119c8c2942 feat: implement medium-effort features - markers, web workers, and bezier automation
Implemented three major medium-effort features to enhance the audio editor:


**1. Region Markers System**
- Add marker type definitions supporting point markers and regions
- Create useMarkers hook for marker state management
- Build MarkerTimeline component for visual marker display
- Create MarkerDialog component for adding/editing markers
- Add keyboard shortcuts: M (add marker), Shift+M (next), Shift+Ctrl+M (previous)
- Support marker navigation, editing, and deletion

**2. Web Worker for Computations**
- Create audio worker for offloading heavy computations
- Implement worker functions: generatePeaks, generateMinMaxPeaks, normalizePeaks, analyzeAudio, findPeak
- Build useAudioWorker hook for easy worker integration
- Integrate worker into Waveform component with peak caching
- Significantly improve UI responsiveness during waveform generation
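
The peak generation offloaded to the worker might look like this — bucketed max-absolute downsampling is an assumption, since the commit does not spell out the algorithm:

```typescript
// Downsamples raw samples into one peak per display bucket; this is
// the kind of O(n) pass worth moving off the main thread.
function generatePeaks(samples: Float32Array, buckets: number): Float32Array {
  const peaks = new Float32Array(buckets);
  const perBucket = samples.length / buckets;
  for (let b = 0; b < buckets; b++) {
    const start = Math.floor(b * perBucket);
    const end = Math.min(samples.length, Math.floor((b + 1) * perBucket));
    let max = 0;
    for (let i = start; i < end; i++) {
      const v = Math.abs(samples[i]);
      if (v > max) max = v;
    }
    peaks[b] = max;
  }
  return peaks;
}
```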

**3. Bezier Curve Automation**
- Enhance interpolateAutomationValue to support Bezier curves
- Implement cubic Bezier interpolation with control handles
- Add createSmoothHandles for auto-smooth curve generation
- Add generateBezierCurvePoints for smooth curve rendering
- Support bezier alongside existing linear and step curves

All features are type-safe and integrate seamlessly with the existing codebase.
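
A sketch of the cubic Bezier evaluation — the point/handle shape is an assumption, and `interpolateAutomationValue` in the codebase may represent handles differently:

```typescript
interface Pt { time: number; value: number; }

// Standard cubic Bezier: B(t) = u^3*P0 + 3u^2*t*C1 + 3u*t^2*C2 + t^3*P1, u = 1 - t.
function cubicBezier(p0: Pt, c1: Pt, c2: Pt, p1: Pt, t: number): Pt {
  const u = 1 - t;
  const blend = (a: number, b: number, c: number, d: number) =>
    u * u * u * a + 3 * u * u * t * b + 3 * u * t * t * c + t * t * t * d;
  return {
    time: blend(p0.time, c1.time, c2.time, p1.time),
    value: blend(p0.value, c1.value, c2.value, p1.value),
  };
}
```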

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 08:25:33 +01:00
8720c35f23 fix: add missing 'S' keyboard shortcut for split at cursor
Added keyboard handler for the 'S' key to trigger split at cursor.
The shortcut was defined in the command palette but missing from
the keyboard event handler.

Note: Ctrl+A for Select All was already working correctly.
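
The handler wiring might look like the following — the modifier and input-focus guards are illustrative assumptions, not confirmed details of the commit:

```typescript
type Key = { key: string; ctrlKey: boolean; metaKey: boolean; targetTag?: string };

// Returns true when the shortcut fired; plain 'S' only, so it doesn't
// shadow combos like Ctrl+S or typing inside text inputs.
function handleShortcut(e: Key, splitAtCursor: () => void): boolean {
  if (e.ctrlKey || e.metaKey) return false;
  if (e.targetTag === "INPUT" || e.targetTag === "TEXTAREA") return false;
  if (e.key.toLowerCase() === "s") { splitAtCursor(); return true; }
  return false;
}
```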

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 07:56:02 +01:00
25ddac349b feat: add copy/paste automation data functionality
Implemented automation clipboard system:
- Added separate automationClipboard state for automation points
- Created handleCopyAutomation function to copy automation lane points
- Created handlePasteAutomation function to paste at current time with time offset
- Added Copy and Clipboard icon buttons to AutomationHeader component
- Automation points preserve curve type and value when copied/pasted
- Points are sorted by time after pasting
- Toast notifications for user feedback
- Ready for integration when automation lanes are actively used
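
A sketch of the paste path — the point type is inferred from the bullet points above and may not match the codebase exactly:

```typescript
interface AutomationPoint { time: number; value: number; curve: "linear" | "step" | "bezier"; }

// Clipboard points keep their spacing: the earliest copied point lands
// at the current time, the rest follow at the same relative offsets.
function pasteAutomation(
  clipboard: AutomationPoint[],
  lane: AutomationPoint[],
  currentTime: number
): AutomationPoint[] {
  if (clipboard.length === 0) return lane;
  const origin = Math.min(...clipboard.map((p) => p.time));
  const pasted = clipboard.map((p) => ({ ...p, time: currentTime + (p.time - origin) }));
  return [...lane, ...pasted].sort((a, b) => a.time - b.time); // points sorted after pasting
}
```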

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 07:51:24 +01:00
08b33aacb5 feat: add split at cursor functionality
Implemented track splitting at playback cursor:
- Added handleSplitAtCursor function to split selected track at current time
- Extracts two audio segments: before and after cursor position
- Creates two new tracks from the segments with numbered names
- Removes original track after split
- Added 'Split at Cursor' command to command palette (keyboard shortcut: S)
- Validation for edge cases (no track selected, no audio, invalid position)
- User feedback via toast notifications
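
Per channel, the split boils down to slicing sample data at the cursor index; the real code would copy each channel into two new `AudioBuffer`s, but this pure sketch shows the index math and the "invalid position" edge case:

```typescript
// Returns the two segments, or null when the cursor isn't strictly
// inside the clip.
function splitChannel(
  samples: Float32Array,
  sampleRate: number,
  cursorTime: number
): [Float32Array, Float32Array] | null {
  const splitIndex = Math.floor(cursorTime * sampleRate);
  if (splitIndex <= 0 || splitIndex >= samples.length) return null;
  return [samples.slice(0, splitIndex), samples.slice(splitIndex)];
}
```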

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 07:38:07 +01:00
9007522e18 feat: add playback speed control (0.25x - 2x)
Implemented variable playback speed functionality:
- Added playbackRate state and ref to useMultiTrackPlayer (0.25x - 2x range)
- Applied playback rate to AudioBufferSourceNode.playbackRate
- Updated timing calculations to account for playback rate
- Real-time playback speed adjustment for active playback
- Dropdown UI control in PlaybackControls with preset speeds
- Integrated changePlaybackRate function through AudioEditor
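
The timing adjustment reduces to scaling elapsed wall-clock time by the rate; the node-side half is just `source.playbackRate.value = rate`. Function names here are illustrative:

```typescript
// Clamp to the supported 0.25x-2x range, then convert wall-clock
// elapsed seconds into media-time seconds.
function clampRate(rate: number): number {
  return Math.min(2, Math.max(0.25, rate));
}

function mediaPosition(startOffset: number, wallElapsed: number, rate: number): number {
  return startOffset + wallElapsed * clampRate(rate);
}
```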

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 07:35:39 +01:00
a47bf09a32 feat: add loop playback functionality
Added complete loop functionality with UI controls:
- Loop state management in useMultiTrackPlayer (loopEnabled, loopStart, loopEnd)
- Automatic restart from loop start when reaching loop end during playback
- Loop toggle button in PlaybackControls with Repeat icon
- Loop points UI showing when loop is enabled (similar to punch in/out)
- Manual loop point adjustment with number inputs
- Quick set buttons to set loop points to current time
- Wired loop functionality through AudioEditor component
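
The restart check run on each playback tick might look like this — the state names follow the commit, while wrapping the overshoot back into the loop region is an assumption:

```typescript
// When looping is on and the transport passes loopEnd, wrap back to
// loopStart instead of continuing.
function applyLoop(
  position: number,
  loopEnabled: boolean,
  loopStart: number,
  loopEnd: number
): { position: number; restarted: boolean } {
  if (loopEnabled && loopEnd > loopStart && position >= loopEnd) {
    return { position: loopStart + (position - loopEnd), restarted: true };
  }
  return { position, restarted: false };
}
```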

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 07:31:53 +01:00
aba26126cc refactor: remove configurable track height setting
Removed the defaultTrackHeight setting from UI preferences as it doesn't need
to be user-configurable. All tracks now use DEFAULT_TRACK_HEIGHT constant (400px).

Changes:
- Removed defaultTrackHeight from UISettings interface
- Removed track height slider from GlobalSettingsDialog
- Updated AudioEditor to use DEFAULT_TRACK_HEIGHT constant directly
- Simplified dependency arrays in addTrack callbacks

This simplifies the settings interface while maintaining the same visual behavior.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 20:55:45 +01:00
66a515ba79 fix: remove AudioWorklet browser warning
Removed the AudioWorklet compatibility check warning since it's an optional
feature that doesn't affect core functionality. The app works fine without
AudioWorklet support, using standard Web Audio API nodes instead.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 20:52:28 +01:00
691f75209d docs: mark Phase 15.2 Responsive Design as complete
Updated PLAN.md to reflect completion of mobile responsiveness enhancements:
- Touch gesture support via collapse/expand chevron buttons
- Collapsible track and master controls with detailed state descriptions
- Track collapse buttons on mobile (dual chevron system)
- Mobile vertical stacking with automation and effects bars
- Height synchronization between track controls and waveform containers

Phase 15.2 now fully complete with comprehensive mobile support.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 20:51:15 +01:00
908e6caaf8 feat: enhance mobile responsiveness with collapsible controls and automation/effects bars
Added comprehensive mobile support for Phase 15 (Polish & Optimization):

**Mobile Layout Enhancements:**
- Track controls now collapsible on mobile with two states:
  - Collapsed: minimal controls with expand chevron, R/M/S buttons, horizontal level meter
  - Expanded: full height fader, pan control, all buttons
- Track collapse buttons added to mobile view (left chevron for track collapse, right chevron for control collapse)
- Master controls collapse button hidden on desktop (lg:hidden)
- Automation and effects bars now available on mobile layout
- Both bars collapsible with eye/eye-off icons, horizontally scrollable when zoomed
- Mobile vertical stacking: controls → waveform → automation → effects per track

**Bug Fixes:**
- Fixed track controls and waveform container height matching on desktop
- Fixed Modal component prop: isOpen → open in all dialog components
- Fixed TypeScript null check for audioBuffer.duration
- Fixed keyboard shortcut category: 'help' → 'view'

**Technical Improvements:**
- Consistent height calculation using trackHeight variable
- Proper responsive breakpoints with Tailwind (sm:640px, lg:1024px)
- Progressive disclosure pattern for mobile controls

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 20:50:44 +01:00
e09bc1449c docs: mark Phase 14 (Settings & Preferences) as complete
Updated PLAN.md to reflect completion of Phase 14:
- Global settings system with localStorage persistence
- 5 settings tabs: Recording, Audio, Editor, Interface, Performance
- Real-time application to editor behavior
- All major settings implemented and functional

Applied settings:
- defaultTrackHeight → new track creation
- defaultZoom → initial zoom state
- enableSpectrogram → analyzer visibility
- sampleRate → recording configuration

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:32:19 +01:00
d03080d3d2 feat: apply enableSpectrogram and sampleRate settings to editor
Applied performance and audio settings to actual editor behavior:
- enableSpectrogram: conditionally show/hide spectrogram analyzer option
- sampleRate: sync recording sample rate with global audio settings

Changes:
- Switch analyzer view away from spectrogram when setting is disabled
- Adjust analyzer toggle grid columns based on spectrogram visibility
- Sync recording hook's sampleRate with global settings via useEffect

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:29:05 +01:00
484e3261c5 feat: apply defaultTrackHeight and defaultZoom settings
- Modified createTrack and createTrackFromBuffer to accept height parameter
- Updated useMultiTrack to pass height when creating tracks
- Applied settings.ui.defaultTrackHeight when adding new tracks
- Applied settings.editor.defaultZoom for initial zoom state
- Removed duplicate useSettings hook declaration in AudioEditor
- New tracks now use configured default height from settings

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:23:41 +01:00
a2cef6cc6e refactor: remove waveform color setting to preserve dynamic coloring
- Removed waveformColor from UISettings interface
- Removed waveform color picker from Interface settings tab
- Preserves dynamic per-track waveform coloring system
- Cleaner settings UI with one less option

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:19:38 +01:00
b1c0ff6f72 fix: constrain fader handles to track lane boundaries
- Fader handles now respect top-8 bottom-8 track padding
- Handle moves only within the visible track lane (60% range)
- Updated both TrackFader and MasterFader components
- Value calculation clamped to track bounds (32px padding top/bottom)
- Handle position mapped to 20%-80% range instead of 0%-100%
- Prevents handle from going beyond visible track area
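
The remap is an affine mapping between the normalized fader value and the 20%-80% band; the band comes from the commit, while the exact clamping behaviour is an assumption:

```typescript
// Normalized fader value (0..1) mapped to a vertical position within
// the 20%-80% band so the handle stays inside the visible track lane
// (32px padding top and bottom).
function valueToPercent(value: number): number {
  return 20 + value * 60; // 0 -> 20%, 1 -> 80%
}

function percentToValue(percent: number): number {
  const clamped = Math.min(80, Math.max(20, percent)); // pointer outside the band
  return (clamped - 20) / 60;
}
```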

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:15:27 +01:00
197bff39fc fix: show Plus button only when effects panel is expanded
- Plus button now conditionally rendered based on track.effectsExpanded
- Matches automation controls behavior
- Cleaner UI when panel is collapsed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:11:06 +01:00
314fced79f feat: Plus button now opens EffectBrowser dialog
- Added EffectBrowser import and state management
- Plus button opens dialog to select from all available effects
- Effect is added to track when selected from browser
- Removed hardcoded low-pass filter addition
- Better UX for adding effects to tracks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:09:46 +01:00
0bd892e3d1 fix: streamline effects header controls and add Plus button
- Removed absolute positioning from eye icon button
- Added Plus button to add effects (currently adds low-pass filter)
- All controls now properly inline in normal flex flow
- Eye button correctly positioned at end without overlap

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:07:06 +01:00
42991092ad fix: automation header controls now properly inline without overlapping
- Removed absolute positioning from eye icon button
- Made all controls part of normal flex flow
- Eye button now correctly positioned at end without overlap
- Chevron controls no longer overlay the eye icon

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 18:03:41 +01:00
adb99a2c33 feat: implement Phase 14 settings & preferences with localStorage persistence
Added comprehensive settings system with 5 categories:
- Recording Settings (existing, integrated)
- Audio Settings (buffer size, sample rate, auto-normalize)
- Editor Settings (auto-save interval, undo limit, snap-to-grid, grid resolution, default zoom)
- Interface Settings (theme, waveform color, font size, default track height)
- Performance Settings (peak/waveform quality, spectrogram toggle, max file size)

Features:
- useSettings hook with localStorage persistence
- Automatic save/load of all settings
- Category-specific reset buttons
- Expanded GlobalSettingsDialog with 5 tabs
- Full integration with AudioEditor
- Settings merge with defaults on load (handles updates gracefully)

Settings are persisted to localStorage and automatically restored on page load.
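
The graceful merge can be sketched as a per-category spread — defaults fill in keys missing from an older saved payload, so settings added in updates appear automatically. The exact merge strategy is an assumption:

```typescript
type Settings = Record<string, Record<string, unknown>>;

// Saved values win per key; any key absent from the saved payload
// falls back to its default.
function mergeSettings(defaults: Settings, saved: Partial<Settings>): Settings {
  const out: Settings = {};
  for (const category of Object.keys(defaults)) {
    out[category] = { ...defaults[category], ...(saved[category] ?? {}) };
  }
  return out;
}
```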

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 16:39:05 +01:00
5d9e02fe95 feat: streamline track and master controls layout consistency
- Streamlined track controls and master controls to the same width (240px)
- Fixed track controls container to use full width of parent column
- Matched TrackControls card structure with MasterControls (gap-3, no w-full/h-full)
- Updated outer container padding from p-2 to p-4 with gap-4
- Adjusted track controls wrapper to center content instead of stretching
- Added max-width constraint to PlaybackControls to prevent width changes
- Centered transport control buttons in footer

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 16:32:49 +01:00
854e64b4ec fix: add missing cn import to TrackList
- Import cn utility function from @/lib/utils/cn
- Fixes ReferenceError: cn is not defined
- Required for conditional classNames on effect labels

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:21:39 +01:00
e7bd262e6f feat: show bypassed effects with gray labels
- Effect labels show in primary color when enabled
- Effect labels show in gray when bypassed/disabled
- Added opacity reduction (60%) for bypassed effects
- Visual feedback matches effect state at a glance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:20:32 +01:00
ba2e138ab9 feat: add effect name labels to effects header
- Display effect names as small chips/badges in effects header
- Shows all effect names at a glance without expanding
- Styled with primary color background and border
- Horizontal scrollable if many effects
- Visible whether effects rack is expanded or collapsed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:19:05 +01:00
7b4a7cc567 fix: remove effects count from effects header
- Remove '(N)' count display from effects bar header
- Effect names are already shown in EffectDevice component:
  - Collapsed: vertical text label
  - Expanded: header with full name
- Cleaner header appearance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:17:47 +01:00
64864cfd34 feat: use eye icon for effects bar folding like automation bar
- Replace ChevronDown/ChevronRight with Eye/EyeOff icons
- Position eye icon absolutely on the right side
- Match AutomationHeader styling with bg-muted/50 and border
- Eye icon shows when expanded, EyeOff when collapsed
- Consistent UX between automation and effects bars

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:16:02 +01:00
14a9c6e163 feat: restore automation controls with AutomationHeader component
- Import and use AutomationHeader component in automation bar
- Add parameter selection dropdown (Volume, Pan, Effect parameters)
- Add automation mode controls (Read, Write, Touch, Latch)
- Add lane height controls (increase/decrease buttons)
- Add show/hide toggle button
- Build available parameters dynamically from track effects
- All controls properly wired to update track automation state

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:13:12 +01:00
0babc469cc fix: enforce minimum 360px height for all tracks using Math.max
- Track component now uses Math.max to ensure track.height is at least MIN_TRACK_HEIGHT
- TrackList component also uses Math.max for consistent enforcement
- Fixes issue where tracks with old height values (340px, 357px) were smaller
- All tracks now guaranteed at least 360px regardless of stored height

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:09:44 +01:00
6658bbbbd4 fix: set minimum and default track height to 360px
- Update DEFAULT_TRACK_HEIGHT from 340px to 360px
- Update MIN_TRACK_HEIGHT from 240px to 360px
- Ensures all tracks have consistent 360px minimum height
- Applies to both control column and waveform column

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:07:10 +01:00
9b07f28995 fix: use DEFAULT_TRACK_HEIGHT constant for consistent track heights
- Import DEFAULT_TRACK_HEIGHT and COLLAPSED_TRACK_HEIGHT from types
- Use DEFAULT_TRACK_HEIGHT (340px) instead of hardcoded 240px fallback
- Ensures all tracks have the same height as the first track
- Matches the height used when creating tracks

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 13:02:38 +01:00
d08482a64c fix: restore horizontal scrollbar by making waveform use container height
- Remove fixed height from Track waveformOnly mode
- Use h-full class to fill flex container instead
- Allows parent scroll container to show horizontal scrollbar based on zoom
- Waveform now properly expands horizontally without clipping

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:58:47 +01:00
1a66669a77 fix: remove wrapper div to show automation lane controls properly
- Remove h-32 wrapper div around AutomationLane
- AutomationLane now uses its own height (lane.height, defaults to 80px)
- Automation controls (canvas, points) now visible and interactive

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:55:42 +01:00
4c794dd293 fix: implement fixed-height track container with flexible waveform
- Track container now has fixed height (240px default or track.height)
- Waveform uses flex-1 to take remaining space
- Automation and effects bars use flex-shrink-0 for fixed height
- When bars expand, waveform shrinks instead of container growing
- Matches DAW behavior where track height stays constant

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:54:45 +01:00
29de647b30 refactor: use stacked layout for waveform, automation, and effects bars
- Replace absolute positioning with flex column layout
- Waveform, automation bar, and effects bar now stacked vertically
- Removes gaps between bars naturally with stacked layout
- Both bars remain collapsible with no position dependencies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:50:43 +01:00
83ae2e7ea7 fix: remove border-t from effects bar to eliminate gap with automation bar
- Removed border-t from effects bar container
- Keeps border-b for bottom edge separation
- No gap between automation and effects bars in both folded and unfolded states

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:43:00 +01:00
950c0f69a6 fix: automation bar now properly foldable with default parameter
- Default selectedParameterId to 'volume' when undefined
- Fixes issue where clicking automation header did nothing
- Chevron now correctly shows fold/unfold state

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:40:45 +01:00
a2542ac87f fix: remove gap between automation and effects bars
- Remove border-t from automation bar to eliminate gap with effects bar

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:35:41 +01:00
7aebc1da24 fix: add missing Sparkles import and correct AutomationLane props
- Add Sparkles icon import to TrackExtensions
- Remove invalid props from AutomationLane (trackId, isPlaying, onSeek)
- Fix createAutomationPoint call to use object parameter with curve property

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:32:30 +01:00
42b8f61f5f fix: make automation bar collapsible and restore automation controls
Fixed issues:
1. Made automation bar collapsible with chevron indicator
2. Removed border-t from automation lane container (no gap to effects)
3. Restored lane.visible filter so AutomationLane properly renders with controls

The AutomationLane component has all the rich editing UI:
- Click canvas to add automation points
- Drag points to move them
- Double-click to delete points
- Keyboard delete (Del/Backspace) for selected points
- Visual feedback with selected state

All controls are intact and functional when automation bar is expanded.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:18:58 +01:00
35a6ee35d0 fix: remove A/E buttons, make automation always expanded, add bottom border
Fixed all three issues:
1. Removed automation (A) and effects (E) toggle buttons from TrackControls
2. Made automation bar always expanded and non-collapsible
3. Added bottom border to effects bar (border-b)

Changes:
- TrackControls.tsx: Removed entire Row 2 (A/E buttons section)
- TrackList.tsx: Removed click handler and chevron from automation header
- TrackList.tsx: Automation lane always visible (no conditional rendering)
- TrackList.tsx: Added border-b to effects bar container
- TrackList.tsx: Added border-b to automation bar for visual consistency

Bars are now permanent, always-visible UI elements at the bottom of tracks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:16:17 +01:00
235fc3913c fix: position bars at bottom, always visible, no inner containers
Major improvements to automation and effects bars:
- Both bars now positioned at bottom of waveform (not top)
- Bars are always visible when track is expanded (no show/hide buttons)
- Effects bar at absolute bottom
- Automation bar above effects, dynamically positioned based on effects state
- Removed inner container from effects - direct rendering with EffectDevice
- Removed close buttons (X icons) - bars are permanent
- Effects render directly with gap-3 padding, no TrackExtensions wrapper
- Automation controls preserved (AutomationLane unchanged)

This creates a cleaner, always-accessible interface where users can quickly
expand/collapse automation or effects without toggling visibility.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:12:26 +01:00
0e59870884 refactor: replace overlay cards with integrated collapsible bars
Changes automation and effects from card-based overlays to integrated bars:
- Automation bar positioned at absolute top with no margins/padding
- Effects bar positioned below automation or at top if automation hidden
- Both bars use bg-card/90 backdrop-blur-sm for subtle transparency
- Collapsible with ChevronDown/ChevronRight indicators
- Close button (X icon) on each bar
- Effects bar dynamically positions based on automation state
- Added effectsExpanded property to Track type
- Removed card container styling for cleaner integration

The bars now sit directly on the waveform as requested, making them feel
more integrated into the track view rather than floating overlays.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 12:07:23 +01:00
8c779ccd88 fix: improve overlay visibility with header controls and lighter backdrop
Changes to automation and effects overlays:
- Reduced backdrop opacity from bg-black/60 to bg-black/30 (less dark)
- Added header to automation overlay with parameter name display
- Added close button to automation overlay (ChevronDown icon)
- Wrapped automation lane in rounded card with border and shadow
- Both overlays now have consistent visual styling
- Added ChevronDown import to TrackList

This makes the overlays less obtrusive and adds proper controls for closing
the automation view, matching the effects overlay pattern.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:56:59 +01:00
b57ac5912a feat: implement overlay architecture for automation lanes and effects
Automation lanes and effects now render as overlays on top of the waveform
instead of below the track, solving both visibility and layout proportion issues.

Changes:
- Wrapped Track waveforms in relative container for overlay positioning
- Automation lanes render with bg-black/60 backdrop-blur overlay
- Effects render with TrackExtensions in overlay mode (asOverlay prop)
- Added overlay-specific rendering with close button and better empty state
- Both overlays use absolute positioning with z-10 for proper stacking
- Eliminated height mismatch between controls and waveform areas

This approach provides better visual integration and eliminates the need
to match heights between the two-column layout.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:53:38 +01:00
d2ed7d6e78 fix: effects expansion, automation lanes, and layout alignment
Fixed multiple issues with the track layout system:

1. Effect cards now expandable/collapsible
   - Added onToggleExpanded callback to EffectDevice
   - Effect expansion state is properly toggled and persisted

2. Removed left column spacers causing misalignment
   - Removed automation lane spacer (h-32)
   - Removed effects section spacer (h-64/h-8)
   - Automation lanes and effects now only in waveform column
   - This eliminates the height mismatch between columns

3. Layout now cleaner
   - Left column stays fixed with only track controls
   - Right column contains waveforms, automation, and effects
   - No artificial spacers needed for alignment

The automation lanes and effects sections now appear properly in the
waveform area without creating alignment issues in the controls column.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:45:38 +01:00
cd310ce7e4 fix: ImportDialog not respecting open prop causing unwanted display
Fixed critical bug where ImportDialog was always rendered regardless of
the open prop value. The component interface didn't match the actual
implementation, causing it to appear on page load.

Changes:
- Added `open` prop to ImportDialogProps interface
- Added early return when `open` is false to prevent rendering
- Renamed `onCancel` to `onClose` to match Track component usage
- Made fileName, sampleRate, and channels optional props
- Dialog now only appears when explicitly opened by user action

This fixes the issue where the dialog appeared immediately on page load
when loading a saved project.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:39:34 +01:00
594ff7f4c9 fix: improve effects panel styling with padding and gap
Added padding around effects device rack and gap between effect cards
for better visual integration and spacing.

Changes:
- Added p-3 padding to effects rack container
- Added gap-3 between effect device cards
- Improves visual consistency and readability

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:37:45 +01:00
ca63d12cbf fix: prevent multiple ImportDialog instances from appearing
Fixed issue where ImportDialog was being rendered for each track in
waveform mode, causing multiple unclosable dialogs to appear on page load.

Changes:
- Moved ImportDialog to renderControlsOnly mode only
- Each track now has exactly one ImportDialog (rendered in controls column)
- Removed duplicate ImportDialog from renderWaveformOnly mode
- Removed ImportDialog from fallback return statement

This ensures only one dialog appears per track, making it properly closable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:30:03 +01:00
7a7d6891cd fix: restore automation lanes and effects sections in two-column layout
Restored the automation lane and effects sections that were removed during
the Track component refactoring. These sections now render as full-width
rows below each track, spanning across both the controls and waveforms columns.

Changes:
- Created TrackExtensions component for effects section rendering
- Added automation lane rendering in TrackList after each track waveform
- Added placeholder spacers in left controls column to maintain alignment
- Effects section shows collapsible device rack with mini preview when collapsed
- Automation lanes render when track.automation.showAutomation is true
- Import dialog moved to waveform-only rendering mode

The automation and effects sections can now be expanded and collapsed again.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:24:38 +01:00
90e66e8bef fix: restore border between track controls and waveforms
Added back the right border on the track controls column to visually
separate the controls from the waveform area.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:16:23 +01:00
e0b878daad fix: add bottom padding to track controls to compensate for scrollbar
Added pb-3 (padding-bottom) to the track controls column to account
for the horizontal scrollbar height in the waveforms column, ensuring
track controls stay perfectly aligned with their waveforms.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:13:55 +01:00
39ea599f18 fix: synchronize vertical scrolling between track controls and waveforms
Track controls now stay perfectly aligned with their waveforms during
vertical scrolling. The waveform column handles all scrolling (both
horizontal and vertical), and synchronizes its vertical scroll position
to the controls column.

Changes:
- Removed independent vertical scroll from controls column
- Added scroll event handler to waveforms column
- Controls column scrollTop is synced with waveforms column
- Track controls and waveforms now stay aligned at all times

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:11:38 +01:00
45d46067ea fix: move useRef before early return to comply with Rules of Hooks
Fixed React Hooks error by moving waveformScrollRef declaration
before the conditional early return. Hooks must always be called
in the same order on every render.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:07:15 +01:00
d7dfb8a746 feat: implement synchronized horizontal scrolling for track waveforms
Refactored track layout to use a two-column design where:
- Left column: All track control panels (fixed, vertical scroll)
- Right column: All waveforms (shared horizontal scroll container)

This ensures all track waveforms scroll together horizontally when
zoomed in, providing a more cohesive DAW-like experience.

Changes:
- Added renderControlsOnly and renderWaveformOnly props to Track component
- Track component now supports conditional rendering of just controls or just waveform
- TrackList renders each track twice: once for controls, once for waveforms
- Waveforms share a common scrollable container for synchronized scrolling
- Track controls stay fixed while waveforms scroll horizontally together

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 11:05:27 +01:00
5dadba9c9f fix: waveform fills viewport width when zoom is 1.0
Modified the waveform minWidth calculation to only apply when
zoom > 1, so the waveform fills the available viewport space
at default zoom level instead of creating unnecessary horizontal
scrolling.

Changes:
- minWidth only applied when zoom > 1
- At zoom = 1.0, waveform takes 100% of available space
- Scrollbar only appears when actually zoomed in

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:52:14 +01:00
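The minWidth rule described above can be sketched as a small helper (the function name and signature are illustrative, not the actual diff):

```typescript
// At zoom <= 1 the waveform fills the viewport (no explicit minWidth);
// beyond 1 it grows linearly, which is what triggers horizontal scrolling.
function waveformMinWidth(zoom: number, viewportWidth: number): string | undefined {
  return zoom > 1 ? `${Math.round(viewportWidth * zoom)}px` : undefined;
}
```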
cd311d8145 fix: make only waveform area scrollable, keep track controls fixed
Fixed horizontal scrollbar implementation to only scroll the waveform
content area while keeping track controls fixed in the viewport.

Changes:
- Wrapped waveform content in a scrollable container with overflow-x-auto
- Moved minWidth calculation to inner container
- Track controls now stay fixed at 192px width
- No scrollbar appears on initial track load
- Scrollbar only appears when zooming extends content beyond viewport

Resolves issue where entire track row (including controls) was
scrolling out of view when zoomed in.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:50:09 +01:00
c7cb0b2504 fix: waveform container now expands with zoom for horizontal scrolling 2025-11-19 10:42:11 +01:00
b8d4053cbc fix: add horizontal scrollbar for zoomed waveforms 2025-11-19 10:39:15 +01:00
ac8aa9e6c6 feat: complete Phase 13 - Keyboard Shortcuts
Added all remaining keyboard shortcuts for a complete keyboard-driven workflow:

Playback Shortcuts:
- Home: Jump to start
- End: Jump to end
- Left/Right Arrow: Seek ±1 second
- Ctrl+Left/Right: Seek ±5 seconds

Editing Shortcuts:
- Ctrl+A: Select All (entire track content)

View Shortcuts:
- Ctrl+Plus/Equals: Zoom in
- Ctrl+Minus: Zoom out
- Ctrl+0: Fit to view

All shortcuts work seamlessly with existing shortcuts:
- Spacebar: Play/Pause
- Ctrl+Z/Y: Undo/Redo
- Ctrl+X/C/V: Cut/Copy/Paste
- Ctrl+S: Save
- Ctrl+D: Duplicate
- Delete/Backspace: Delete
- Escape: Clear selection

The editor is now fully controllable via keyboard for a professional
audio editing workflow similar to Audacity/Reaper.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:36:58 +01:00
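The shortcut table above can be sketched as a pure dispatch function; the action ids below are assumptions, not the editor's real handler names:

```typescript
type Shortcut = { key: string; ctrl?: boolean };

// Maps a key event to an editor action id, mirroring the shortcut list above.
function resolveShortcut(e: Shortcut): string | null {
  if (e.ctrl) {
    switch (e.key) {
      case "a": return "select-all";
      case "=": case "+": return "zoom-in";
      case "-": return "zoom-out";
      case "0": return "zoom-fit";
      case "ArrowLeft": return "seek-back-5s";
      case "ArrowRight": return "seek-forward-5s";
    }
    return null;
  }
  switch (e.key) {
    case "Home": return "jump-start";
    case "End": return "jump-end";
    case "ArrowLeft": return "seek-back-1s";
    case "ArrowRight": return "seek-forward-1s";
    case " ": return "toggle-playback";
  }
  return null;
}
```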
7de75f7b2b refactor: export/import projects as ZIP files instead of JSON
Changed from single JSON file to ZIP archive format to avoid string
length limits when serializing large audio buffers.

New ZIP structure:
- project.json (metadata, track info, effects, automation)
- track_0.wav, track_1.wav, etc. (audio files in WAV format)

Benefits:
- No more RangeError: Invalid string length
- Smaller file sizes (binary WAV vs. base64-encoded JSON)
- Human-readable audio files in standard format
- Can extract and inspect individual tracks
- Easier to edit/debug project metadata

Added jszip dependency for ZIP file handling.
Changed file picker to accept .zip files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:30:20 +01:00
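The per-track WAV files inside the ZIP start with a standard 44-byte RIFF header; a minimal 16-bit PCM header writer looks roughly like this (a sketch, not the project's actual export code):

```typescript
// Writes a canonical 44-byte WAV header for 16-bit PCM audio.
function wavHeader(numSamples: number, sampleRate: number, channels: number): DataView {
  const bytesPerSample = 2; // 16-bit
  const dataSize = numSamples * channels * bytesPerSample;
  const v = new DataView(new ArrayBuffer(44));
  const writeStr = (off: number, s: string) => {
    for (let i = 0; i < s.length; i++) v.setUint8(off + i, s.charCodeAt(i));
  };
  writeStr(0, "RIFF");
  v.setUint32(4, 36 + dataSize, true);                           // RIFF chunk size
  writeStr(8, "WAVE");
  writeStr(12, "fmt ");
  v.setUint32(16, 16, true);                                     // fmt chunk size
  v.setUint16(20, 1, true);                                      // PCM format
  v.setUint16(22, channels, true);
  v.setUint32(24, sampleRate, true);
  v.setUint32(28, sampleRate * channels * bytesPerSample, true); // byte rate
  v.setUint16(32, channels * bytesPerSample, true);              // block align
  v.setUint16(34, 16, true);                                     // bits per sample
  writeStr(36, "data");
  v.setUint32(40, dataSize, true);
  return v;
}
```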
543eb069d7 docs: mark Phase 12.3 (Export/Import) as complete 2025-11-19 10:25:30 +01:00
a626427142 feat: implement Phase 12.3 - Project Export/Import
Added full export/import functionality for projects:

Export Features:
- Export button for each project in Projects dialog
- Downloads project as JSON file with all data
- Includes tracks, audio buffers, effects, automation, settings
- Filename format: {project_name}_{timestamp}.json

Import Features:
- Import button in Projects dialog header
- File picker for .json files
- Automatically generates new project ID to avoid conflicts
- Appends "(Imported)" to project name
- Preserves all project data

This enables:
- Backup of projects outside the browser
- Sharing projects with collaborators
- Migration between computers/browsers
- Version snapshots at different stages

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:25:06 +01:00
9ad504478d docs: mark Phase 12 (Project Management) as complete
Completed features:
- IndexedDB project storage with full serialization
- Projects dialog UI for managing projects
- Auto-save (3-second debounce, silent)
- Manual save with Ctrl+S keyboard shortcut
- Auto-load last project on startup
- Editable project name in header
- Delete and duplicate project functionality
- Project metadata tracking (created/updated timestamps)

Phase 12.3 (Export/Import JSON) remains for future implementation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:14:12 +01:00
bcf439ca5e feat: add manual save with Ctrl+S and suppress auto-save toasts
Changes:
- Modified handleSaveProject to accept showToast parameter (default: false)
- Auto-saves now run silently without toast notifications
- Added Ctrl+S / Cmd+S keyboard shortcut for manual save with toast
- Added "Save Project" and "Open Projects" to command palette
- Error toasts still shown for all save failures

This provides the best of both worlds:
- Automatic background saves don't interrupt the user
- Manual saves (Ctrl+S or command palette) provide confirmation
- Users can work without being constantly notified

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:10:32 +01:00
31af08e9f7 fix: stabilize auto-save by using ref for currentTime
The handleSaveProject callback had currentTime in its dependencies, which
caused the callback to be recreated on every playback frame update. This
made the auto-save effect reset its timer constantly, preventing auto-save
from ever triggering.

Solution: Use a ref to capture the latest currentTime value without
including it in the callback dependencies. This keeps the callback stable
while still saving the correct currentTime.

Added debug logging to track auto-save scheduling and triggering.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:05:21 +01:00
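The ref pattern above, reduced to a framework-free sketch (names are illustrative): the callback identity stays stable while still reading the latest value through the ref.

```typescript
// A mutable ref holds the latest currentTime; the save closure reads it
// at call time, so the closure never needs to be recreated.
function makeSaver() {
  const currentTimeRef = { current: 0 };                    // updated every playback frame
  const save = () => ({ savedAt: currentTimeRef.current }); // stable identity
  return { currentTimeRef, save };
}
```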
1b41fca393 fix: serialize automation data to prevent DataCloneError
Deep clone automation data using JSON.parse(JSON.stringify()) to remove
any functions before saving to IndexedDB. This prevents DataCloneError
when trying to store non-serializable data.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 10:02:37 +01:00
67abbb20cb fix: serialize effects properly to avoid DataCloneError
Effects contain non-serializable data like functions and audio nodes that
cannot be stored in IndexedDB. Added proper serialization.

Changes:
- Added serializeEffects function to strip non-serializable data
- Uses JSON parse/stringify to deep clone parameters
- Preserves effect type, name, enabled state, and parameters
- Fixes DataCloneError when saving projects with effects

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 09:56:02 +01:00
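The serialization described above amounts to keeping only plain-data fields and deep-cloning parameters via JSON; a sketch with an assumed effect shape:

```typescript
interface SerializedEffect {
  type: string;
  name: string;
  enabled: boolean;
  parameters: Record<string, number>;
}

// Strips functions and audio nodes by copying only serializable fields;
// JSON round-trip deep-clones the parameter object.
function serializeEffects(effects: any[]): SerializedEffect[] {
  return effects.map((fx) => ({
    type: fx.type,
    name: fx.name,
    enabled: fx.enabled,
    parameters: JSON.parse(JSON.stringify(fx.parameters ?? {})),
  }));
}
```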
0d86cff1b7 feat: disable localStorage persistence and add auto-load last project
**useMultiTrack.ts:**
- Removed localStorage persistence (tracks, effects, automation)
- Now relies entirely on IndexedDB via project management system
- Simpler state management without dual persistence

**AudioEditor.tsx:**
- Store last project ID in localStorage when saving
- Auto-load last project on page mount
- Only runs once per session with hasAutoLoaded flag
- Falls back to empty state if project can't be loaded

**Benefits:**
- No more conflicts between localStorage and IndexedDB
- Effects and automation properly persisted
- Seamless experience - reload page and your project is ready
- Single source of truth (IndexedDB)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 09:51:27 +01:00
abd2a403cb debug: add logging for project save/load operations 2025-11-19 09:45:29 +01:00
102c67dc8d fix: improve auto-save to trigger on project name changes
Changed auto-save from 30s interval to 3s debounce, and added
currentProjectName to dependencies so it saves when name changes.

**Changes:**
- Auto-save now triggers 3 seconds after ANY change (tracks or name)
- Added `currentProjectName` to auto-save effect dependencies
- Removed `onBlur` handler from input (auto-save handles it)
- Added tooltip "Click to edit project name"
- Faster feedback - saves 3s after typing stops instead of 30s

**User Experience:**
- Edit project name → auto-saves after 3s
- Add/modify tracks → auto-saves after 3s
- No need to manually save or wait 30 seconds
- Toast notification confirms save

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 09:41:39 +01:00
b743f97276 fix: project loading and add editable project name
Fixed project loading to restore all track properties and added inline
project name editing in header.

**AudioEditor.tsx:**
- Added `loadTracks` from useMultiTrack hook
- Fixed `handleLoadProject` to use `loadTracks()` instead of recreating tracks
  - Now properly restores all track properties (effects, automation, volume, pan, etc.)
  - Shows track count in success toast message
- Added editable project name input in header
  - Positioned between logo and track actions
  - Auto-sizes based on text length
  - Saves on blur (triggers auto-save)
  - Smooth hover/focus transitions
  - Muted color that brightens on interaction

**useMultiTrack.ts:**
- Added `loadTracks()` method to replace all tracks at once
- Enables proper project loading with full state restoration
- Maintains all track properties during load

**Fixes:**
- Projects now load correctly with all tracks and their audio buffers
- Track properties (effects, automation, volume, pan, etc.) fully restored
- Project name can be edited inline in header
- Auto-save triggers when project name changes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 09:32:34 +01:00
d3a5961131 feat: implement Phase 12.2 - Project Management UI Integration
Integrated complete project management system with auto-save:

**AudioEditor.tsx - Full Integration:**
- Added "Projects" button in header toolbar (FolderOpen icon)
- Project state management (currentProjectId, currentProjectName, projects list)
- Comprehensive project handlers:
  - `handleOpenProjectsDialog` - Opens dialog and loads project list
  - `handleSaveProject` - Saves current project to IndexedDB
  - `handleNewProject` - Creates new project with confirmation
  - `handleLoadProject` - Loads project and restores all tracks/settings
  - `handleDeleteProject` - Deletes project with cleanup
  - `handleDuplicateProject` - Creates project copy
- Auto-save effect: Saves project every 30 seconds when tracks exist
- ProjectsDialog component integrated with all handlers
- Toast notifications for all operations

**lib/storage/projects.ts:**
- Re-exported ProjectMetadata type for easier importing
- Fixed type exports

**Key Features:**
- **Auto-save**: Automatically saves every 30 seconds
- **Project persistence**: Full track state, audio buffers, effects, automation
- **Smart loading**: Restores zoom, track order, and all track properties
- **Safety confirmations**: Warns before creating new project with unsaved changes
- **User feedback**: Toast messages for all operations (save, load, delete, duplicate)
- **Seamless workflow**: Projects → Import → Export in logical toolbar order

**User Flow:**
1. Click "Projects" to open project manager
2. Create new project or load existing
3. Work on tracks (auto-saves every 30s)
4. Switch between projects anytime
5. Duplicate projects for experimentation
6. Delete old projects to clean up

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 09:26:57 +01:00
e1c19ffcb3 feat: implement Phase 12.1 - Project Management with IndexedDB
Added complete project save/load system using IndexedDB:

**New Files:**
- `lib/storage/db.ts` - IndexedDB database schema and operations
  - ProjectMetadata interface for project metadata
  - SerializedAudioBuffer and SerializedTrack for storage
  - Database initialization with projects object store
  - Audio buffer serialization/deserialization functions
  - CRUD operations for projects

- `lib/storage/projects.ts` - High-level project management service
  - Save/load project state with tracks and settings
  - List all projects sorted by last updated
  - Delete and duplicate project operations
  - Track serialization with proper type conversions
  - Audio buffer and effect chain handling

- `components/dialogs/ProjectsDialog.tsx` - Project list UI
  - Grid view of all projects with metadata
  - Project actions: Open, Duplicate, Delete
  - Create new project button
  - Empty state with call-to-action
  - Confirmation dialog for deletions

**Key Features:**
- IndexedDB stores complete project state (tracks, audio buffers, settings)
- Efficient serialization of AudioBuffer channel data
- Preserves all track properties (effects, automation, volume, pan)
- Sample rate and duration tracking
- Created/updated timestamps
- Type-safe with full TypeScript support

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 09:19:52 +01:00
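The AudioBuffer serialization above boils down to copying each Float32Array channel into a plain array for IndexedDB and restoring it on load; a sketch with an interface mirroring the commit description (exact field names are assumptions):

```typescript
interface SerializedAudioBuffer {
  sampleRate: number;
  length: number;
  numberOfChannels: number;
  channelData: number[][];
}

// Copy typed-array channel data into structured-clone-safe plain arrays.
function serializeChannels(channels: Float32Array[], sampleRate: number): SerializedAudioBuffer {
  return {
    sampleRate,
    length: channels[0]?.length ?? 0,
    numberOfChannels: channels.length,
    channelData: channels.map((c) => Array.from(c)),
  };
}

// Restore Float32Arrays (the real code would copy these into an AudioBuffer).
function deserializeChannels(s: SerializedAudioBuffer): Float32Array[] {
  return s.channelData.map((c) => Float32Array.from(c));
}
```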
e208a448d0 fix: remove fake FLAC export format
Removed the non-standard FLAC export option which was causing import issues:

**Problem:**
- "FLAC" export was actually a custom compressed WAV format
- Used fflate (DEFLATE compression) instead of real FLAC encoding
- Files couldn't be imported back or opened in other software
- No browser-compatible FLAC encoder library exists

**Changes:**
- Removed FLAC from export format options (WAV and MP3 only)
- Removed FLAC quality slider from ExportDialog
- Removed audioBufferToFlac function reference
- Updated ExportSettings interface to only include 'wav' | 'mp3'
- Simplified bit depth selector (WAV only instead of WAV/FLAC)
- Updated format descriptions to clarify lossy vs lossless

**Export formats now:**
- WAV - Lossless, Uncompressed (16/24/32-bit)
- MP3 - Lossy, Compressed (128/192/256/320 kbps)

Users can now successfully export and re-import their audio!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 09:08:17 +01:00
500a466bae feat: integrate ImportDialog into file upload flow
Now when users upload an audio file, they see the ImportDialog with options:

**User Flow:**
1. User selects audio file (click or drag-drop)
2. File is pre-decoded to extract metadata (sample rate, channels)
3. ImportDialog shows with file info and import options:
   - Convert to Mono (if stereo/multi-channel)
   - Resample Audio (44.1kHz - 192kHz)
   - Normalize on Import
4. User clicks Import (or Cancel)
5. File is imported with selected transformations applied
6. Track name auto-updates to filename

**Track.tsx Changes:**
- Added import dialog state (showImportDialog, pendingFile, fileMetadata)
- Updated handleFileChange to show dialog instead of direct import
- Added handleImport to process file with user-selected options
- Added handleImportCancel to dismiss dialog
- Renders ImportDialog when showImportDialog is true
- Logs imported audio metadata to console

Now users can see and control all import transformations!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 08:38:15 +01:00
37f910acb7 feat: complete Phase 11.4 - comprehensive audio file import
Implemented advanced audio import capabilities:

**Import Features:**
- Support for WAV, MP3, OGG, FLAC, M4A, AIFF formats
- Sample rate conversion using OfflineAudioContext
- Stereo to mono conversion (equal channel mixing)
- Normalize on import option (99% peak with 1% headroom)
- Comprehensive codec detection from MIME types and extensions

**API Enhancements:**
- ImportOptions interface (convertToMono, targetSampleRate, normalizeOnImport)
- importAudioFile() function returning buffer + metadata
- AudioFileInfo with AudioMetadata (codec, duration, channels, sample rate, file size)
- Enhanced decodeAudioFile() with optional import transformations

**UI Components:**
- ImportDialog component with import settings controls
- Sample rate selector (44.1kHz - 192kHz)
- Checkbox options for mono conversion and normalization
- File info display (original sample rate and channels)
- Updated FileUpload to show AIFF support

**Technical Implementation:**
- Offline resampling for quality preservation
- Equal-power channel mixing for stereo-to-mono
- Peak detection across all channels
- Metadata extraction with codec identification

Phase 11 (Export & Import) now complete!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 08:25:36 +01:00
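Two of the import transforms above can be sketched as pure functions over channel data (helper names are assumptions; the real code runs on decoded AudioBuffers):

```typescript
// Equal-weight mix of all channels down to mono.
function mixToMono(channels: Float32Array[]): Float32Array {
  const len = channels[0].length;
  const out = new Float32Array(len);
  for (let i = 0; i < len; i++) {
    let sum = 0;
    for (const ch of channels) sum += ch[i];
    out[i] = sum / channels.length;
  }
  return out;
}

// Scale samples so the peak lands at targetPeak (0.99 leaves ~1% headroom).
function normalize(samples: Float32Array, targetPeak = 0.99): Float32Array {
  let peak = 0;
  for (let i = 0; i < samples.length; i++) peak = Math.max(peak, Math.abs(samples[i]));
  if (peak === 0) return samples;
  const gain = targetPeak / peak;
  return samples.map((s) => s * gain);
}
```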
c3e295f695 fix: use lamejs Mp3Encoder API for proper module initialization
Switched from low-level Lame API to Mp3Encoder class which:
- Properly initializes all required modules (Lame, BitStream, etc.)
- Handles module dependencies via setModules() calls
- Provides a simpler encodeBuffer/flush API
- Resolves "init_bit_stream_w is not defined" error

Updated TypeScript declarations to export Mp3Encoder and WavHeader
from lamejs/src/js/index.js

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 08:19:48 +01:00
51114330ea fix: use direct ES module imports from lamejs source files
Fixed MP3 export by importing lamejs modules directly from source:
- Import MPEGMode, Lame, and BitStream from individual source files
- Use Lame API directly instead of Mp3Encoder wrapper
- Updated TypeScript declarations for each module
- Resolves "MPEGMode is not defined" error

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 08:16:52 +01:00
d5c84d35e4 fix: install lamejs from GitHub repo for proper browser support
Switched from npm package to GitHub repo (github:zhuker/lamejs) which
includes the proper browser build. Reverted to simple dynamic import.

Fixes MP3 export MPEGMode reference error.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 08:11:46 +01:00
77916d4d07 fix: load lamejs from pre-built browser bundle
Changed MP3 export to dynamically load the pre-built lame.min.js
bundle from node_modules instead of trying to import the CommonJS
module. The browser bundle properly bundles all dependencies including
MPEGMode and exposes a global lamejs object.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 08:06:24 +01:00
8112ff1ec3 fix: add CommonJS compatibility for lamejs dynamic import
Fixed MP3 export error by handling both default and named exports
from lamejs module to support CommonJS/ESM interop.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 07:52:17 +01:00
df1314a37c docs: update PLAN.md for Phase 11.3 completion
Updated documentation to reflect export regions implementation:
- Marked Phase 11.3 (Export Regions) as complete
- Updated progress overview to show Phase 11.1-11.3 complete
- Added export scope details to Working Features
- Listed all three export modes: project, selection, tracks
- Updated phase status summary

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 07:49:10 +01:00
38a2b2962d feat: add export scope options (project/selection/tracks)
Implemented Phase 11.3 - Export Regions:
- Added export scope selector in ExportDialog
  - Entire Project: Mix all tracks into single file
  - Selected Region: Export only the selected region (disabled if no selection)
  - Individual Tracks: Export each track as separate file
- Updated ExportSettings interface with scope property
- Refactored handleExport to support all three export modes:
  - Project mode: Mix all tracks (existing behavior)
  - Selection mode: Extract selection from all tracks and mix
  - Tracks mode: Loop through tracks and export separately with sanitized filenames
- Added hasSelection prop to ExportDialog to enable/disable selection option
- Created helper function convertAndDownload to reduce code duplication
- Selection mode uses extractBufferSegment for precise region extraction
- Track names are sanitized for filenames (remove special characters)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 07:47:56 +01:00
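A plausible version of the filename sanitizer mentioned above (the actual implementation may differ): track names are reduced to safe characters before being used as per-track export filenames.

```typescript
// Drop characters unsafe in filenames, then collapse whitespace to underscores.
function sanitizeFilename(name: string): string {
  return name
    .trim()
    .replace(/[^a-zA-Z0-9 _-]/g, "") // remove special characters
    .replace(/\s+/g, "_");           // spaces to underscores
}
```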
c6ff313050 docs: update PLAN.md for Phase 11.1 & 11.2 completion
Updated documentation to reflect completed export features:
- Marked Phase 11.1 (Export Formats) as complete
- Marked Phase 11.2 (Export Settings) as complete
- Added technical implementation details for MP3 and FLAC
- Updated progress overview status
- Added Export Features section to Working Features list
- Updated Analysis Tools section to reflect Phase 10 completion

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 07:41:24 +01:00
6577d9f27b feat: add MP3 and FLAC export formats
Implemented Phase 11.1 export format support:
- Added MP3 export using lamejs library
- Added FLAC export using fflate DEFLATE compression
- Updated ExportDialog with format selector and format-specific options
  - MP3: bitrate selector (128/192/256/320 kbps)
  - FLAC: compression quality slider (0-9)
  - WAV: bit depth selector (16/24/32-bit)
- Updated AudioEditor to route export based on selected format
- Created TypeScript declarations for lamejs
- Fixed AudioStatistics to use audioBuffer instead of buffer property

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 02:14:32 +01:00
1c56e596b5 docs: mark Phase 10 complete in PLAN.md
All Phase 10 items now complete:
- Frequency Analyzer & Spectrogram
- Phase Correlation Meter
- LUFS Loudness Meter
- Audio Statistics with full metrics

Updated status to Phase 11 (Export & Import) in progress.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 02:06:51 +01:00
355bade08f feat: complete Phase 10 - add phase correlation, LUFS, and audio statistics
Implemented remaining Phase 10 analysis tools:

**Phase Correlation Meter (10.3)**
- Real-time stereo phase correlation display
- Pearson correlation coefficient calculation
- Color-coded indicator (-1 to +1 scale)
- Visual feedback: Mono-like, Good Stereo, Wide Stereo, Phase Issues

**LUFS Loudness Meter (10.3)**
- Momentary, Short-term, and Integrated LUFS measurements
- Simplified K-weighting approximation
- Vertical bar display with -70 to 0 LUFS range
- -23 LUFS broadcast standard reference line
- Real-time history tracking (10 seconds)

**Audio Statistics (10.4)**
- Project info: track count, duration, sample rate, channels, bit depth
- Level analysis: peak, RMS, dynamic range, headroom
- Real-time buffer analysis from all tracks
- Color-coded warnings for clipping and low headroom

**Integration**
- Added 5-button toggle in master column (FFT, SPEC, PHS, LUFS, INFO)
- All analyzers share consistent 192px width layout
- Theme-aware styling for light/dark modes
- Compact button labels for space efficiency

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 02:00:41 +01:00
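The phase correlation measure above is a Pearson correlation of the two channel buffers, yielding a value in [-1, +1] (+1 mono-like, -1 fully out of phase); a minimal sketch:

```typescript
// Pearson correlation coefficient between left and right channel buffers.
function phaseCorrelation(left: Float32Array, right: Float32Array): number {
  const n = Math.min(left.length, right.length);
  let sumL = 0, sumR = 0;
  for (let i = 0; i < n; i++) { sumL += left[i]; sumR += right[i]; }
  const meanL = sumL / n, meanR = sumR / n;
  let cov = 0, varL = 0, varR = 0;
  for (let i = 0; i < n; i++) {
    const dl = left[i] - meanL, dr = right[i] - meanR;
    cov += dl * dr;
    varL += dl * dl;
    varR += dr * dr;
  }
  const denom = Math.sqrt(varL * varR);
  return denom === 0 ? 0 : cov / denom; // 0 for silent/constant input
}
```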
461a800bb6 docs: update PLAN.md with Phase 10 progress
Marked completed items in Phase 10:
- Frequency Analyzer (10.1) with real-time FFT display
- Spectrogram (10.2) with time-frequency visualization
- Peak/RMS metering (10.3) already complete
- Clip indicator (10.3) for master channel

Added Analysis Tools section to Working Features.
Updated Current Status and Next Steps.

Remaining Phase 10 items:
- Phase correlation meter
- LUFS loudness meter
- Audio statistics display

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:54:39 +01:00
87771a5125 fix: resolve TypeScript type errors
Fixed several TypeScript issues:
- Changed 'formatter' to 'formatValue' in CircularKnob props
- Added explicit type annotations for formatValue functions
- Initialized animationFrameRef with undefined value
- Removed unused Waveform import from TrackControls

All type checks now pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:52:06 +01:00
90f9218ed3 fix: make spectrogram background match frequency analyzer
Changed spectrogram to use transparency for low values
instead of interpolating from background color:

- Low signal values (0-64) fade from transparent to blue
- Background color shows through transparent areas
- Now matches FrequencyAnalyzer theme behavior
- Both analyzers use bg-muted/30 wrapper for consistent theming

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:48:36 +01:00
3818d93696 fix: use theme-aware background colors for analyzers
Updated both FrequencyAnalyzer and Spectrogram to use
light gray background (bg-muted/30) that adapts to the theme:

- Wrapped canvas in bg-muted/30 container
- FrequencyAnalyzer reads parent background color for canvas fill
- Spectrogram interpolates from background color to blue for low values
- Both analyzers now work properly in light and dark themes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:46:27 +01:00
e4b3433cf3 fix: increase analyzer size to 192px width with proportional height
Increased analyzer dimensions from 160px to 192px width
and from 300px to 360px minimum height for better visibility.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:43:45 +01:00
d0601b2b36 fix: make analyzer cards match master fader width
Constrained analyzer components to 160px width to match
the MasterControls card width, creating better visual alignment
in the master column.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:42:25 +01:00
7dc0780bd2 feat: implement Phase 10.1 & 10.2 - frequency analyzer and spectrogram
Added real-time audio analysis visualizations to master column:

- FrequencyAnalyzer component with FFT bar display
  - Canvas-based rendering with requestAnimationFrame
  - Color gradient from cyan to green based on frequency
  - Frequency axis labels (20Hz, 1kHz, 20kHz)

- Spectrogram component with time-frequency waterfall display
  - Scrolling visualization with ImageData pixel manipulation
  - Color mapping: black → blue → cyan → green → yellow → red
  - Vertical frequency axis with labels

- Master column redesign
  - Fixed width layout (280px)
  - Toggle buttons to switch between FFT and Spectrum views
  - Integrated above master controls with 300px minimum height

- Exposed masterAnalyser from useMultiTrackPlayer hook
  - Analyser node now accessible to visualization components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:40:04 +01:00
4281c65ec1 fix: reduce master mute button size to 32x32px
Changed button dimensions from h-10 w-10 to h-8 w-8 for
a more compact, proportionate size.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:24:54 +01:00
b8ed648124 fix: reduce master mute button size to 40x40px
Changed button dimensions from h-12 w-12 to h-10 w-10 for
a more proportionate size.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:24:18 +01:00
a447a81414 fix: make master mute button square (48x48px)
Changed button dimensions from h-8 w-full to h-12 w-12 for a
square shape matching the design of track buttons.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:23:37 +01:00
797d64b1d3 feat: redesign master mute button to match track style
Changed master mute button from Button component to match track style:
- Native button element with rounded-md styling
- Blue background when muted (bg-blue-500) with shadow
- Card background when unmuted with hover state
- Text-based "M" label instead of icons
- Larger size (h-8, text-[11px]) compared to track (h-5, text-[9px])
- Removed unused Volume2/VolumeX icon imports

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:22:03 +01:00
418c79d961 feat: add touch event support to knobs and faders
Added comprehensive touch event handling for mobile/tablet support:

CircularKnob.tsx:
- Added handleTouchStart, handleTouchMove, handleTouchEnd handlers
- Touch events use same drag logic as mouse events
- Prevents default to avoid scrolling while adjusting

TrackFader.tsx:
- Added touch event handlers for vertical fader control
- Integrated with existing onTouchStart/onTouchEnd callbacks
- Supports touch-based automation recording

MasterFader.tsx:
- Added touch event handlers matching TrackFader
- Complete touch support for master volume control

All components now work seamlessly on touch-enabled devices.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:19:03 +01:00
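The touch-support commit above describes routing touch events through the same drag logic the mouse handlers use. A minimal sketch of that shared logic as a pure function (names and the 200px sensitivity are illustrative assumptions, not the project's actual code):

```typescript
// Illustrative sketch: both mouse and touch handlers can feed clientY values
// into one pure update function, so the drag math is shared between them.
// `sensitivity` is the pixel distance for a full 0..1 sweep (assumed value).

function valueFromVerticalDrag(
  startValue: number, // value when the drag began (0..1)
  startY: number,     // clientY at drag start
  currentY: number,   // clientY from mousemove or touchmove
  sensitivity = 200   // pixels per full range
): number {
  const delta = (startY - currentY) / sensitivity; // dragging up increases
  return Math.min(1, Math.max(0, startValue + delta));
}

// A touch handler would read clientY from the first touch point and call the
// same function the mouse handler uses, after event.preventDefault() to stop
// the page from scrolling while adjusting.
```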
edebdc2129 fix: revert track heights and increase control card bottom padding
Reverted track heights to original values:
- DEFAULT_TRACK_HEIGHT: 360px → 340px
- MIN_TRACK_HEIGHT: 260px → 240px

Increased bottom padding of TrackControls from py-2 to pt-2 pb-4
for better spacing at the bottom of the control card.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:15:36 +01:00
a8570f2458 feat: increase minimum track height from 240px to 260px
Raised MIN_TRACK_HEIGHT to ensure proper spacing for all track
controls with the new border styling.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:14:12 +01:00
3cce3f8c05 feat: increase default track height from 340px to 360px
Increased DEFAULT_TRACK_HEIGHT to provide more vertical space for
track controls and waveform viewing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:13:05 +01:00
9a161bbe42 fix: reduce background opacity to 50% for subtler appearance
Changed background from bg-card to bg-card/50 for both TrackControls
and MasterControls to make the border more prominent.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:10:33 +01:00
5c85914974 fix: increase border and background opacity for better visibility
Increased opacity for both TrackControls and MasterControls:
- bg-muted/10 → bg-muted/20 (background)
- border-accent/30 → border-accent/50 (border)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:07:52 +01:00
d34611ef10 feat: add border styling to TrackControls to match MasterControls
Added consistent border styling:
- bg-muted/10 background
- border-2 border-accent/30
- rounded-lg corners
- px-3 horizontal padding

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:06:46 +01:00
1ebd169137 fix: set TrackFader spacing to 16px to match MasterFader
Both track and master faders now have consistent 16px margin-left.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:05:40 +01:00
9140110589 fix: increase fader horizontal spacing (12px track, 16px master)
Increased margin-left values for better visibility:
- TrackFader: 8px → 12px
- MasterFader: 12px → 16px

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:04:43 +01:00
628c544511 fix: add horizontal spacing to faders (8px track, 12px master)
Shifted faders to the right for better visual alignment:
- TrackFader: 8px margin-left
- MasterFader: 12px margin-left

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:03:42 +01:00
f7a7c4420c fix: revert to 1 decimal place for fader labels
Changed back from .toFixed(2) to .toFixed(1) while keeping the fixed
widths to prevent component movement.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:02:17 +01:00
33be21295e fix: add fixed width to fader labels and show 2 decimal places
Added fixed widths to prevent component movement when values change:
- TrackFader: w-[32px] for value display
- MasterFader: w-[36px] for value/peak/RMS display

Changed decimal precision from .toFixed(1) to .toFixed(2) for more
accurate dB readings (e.g., -3.11 instead of -3.1).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 01:00:57 +01:00
34452862ca fix: remove double dB conversion from track levels for consistent metering
Track levels were being converted to dB scale twice:
1. First in useMultiTrackPlayer via linearToDbScale()
2. Again in TrackFader via linearToDb()

This caused tracks to show incorrect meter levels compared to master.
Now both track and master levels store raw linear values (0-1) and
let the fader components handle the single dB conversion for display.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:55:46 +01:00
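The double-conversion fix above hinges on keeping levels linear in state and converting to dB exactly once, at display time. A sketch of that single conversion pair (helper names are illustrative, not the project's actual utilities):

```typescript
// Raw levels stay linear (0..1) in state; only the display layer converts.
// Converting twice through two layers skews the meter, which is the bug the
// commit removes: convert exactly once, at display time.

function linearToDb(linear: number): number {
  if (linear <= 0) return -Infinity; // silence
  return 20 * Math.log10(linear);
}

function dbToLinear(db: number): number {
  return Math.pow(10, db / 20);
}
```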
7a45a985c7 fix: convert master meter levels to dB scale for consistent metering
Previously, master meters received raw linear values (0-1) while track
meters received dB-normalized values, causing inconsistent metering display.
Now both master peak and RMS levels are converted using linearToDbScale()
for accurate comparison between track and master levels.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:50:27 +01:00
bb30aa95e1 fix: standardize meter colors to use 500 shades
- Changed RMS meter colors from 400 to 500 shades (brighter)
- Both Peak and RMS meters now use same color brightness
- Applied to both TrackFader and MasterFader
- Provides better visual consistency between the two meter types

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:42:40 +01:00
f830640732 feat: shift faders 8px to the right
- Added ml-2 (8px left margin) to TrackFader
- Added ml-2 (8px left margin) to MasterFader
- Both faders with their labels now shifted right for better alignment

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:40:34 +01:00
1d35c8f5b2 feat: center fader with knob above and buttons below
- Restructured TrackControls layout using flexbox justify-between
- Pan knob positioned at top
- Fader vertically centered in flex-1 middle section
- R/S/M/A/E buttons positioned at bottom in two rows
- Main container uses h-full for proper vertical distribution

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:38:55 +01:00
87b1c2e21a feat: increase default track height to 340px
- Increased default track height from 320px to 340px
- Provides more breathing room for all track controls

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:33:39 +01:00
43cdf9abdd feat: improve track controls layout and increase track height
- Reverted buttons to two rows (R/S/M and A/E) for better organization
- Changed button borders from 'rounded' to 'rounded-md' for consistency
- Centered pan knob and fader in their containers
- Increased spacing from gap-1.5 py-1.5 to gap-2 py-2
- Increased default track height from 300px to 320px
- Increased minimum track height from 220px to 240px
- All buttons now clearly visible with proper spacing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:31:52 +01:00
935ab85c08 feat: add active state to automation and effects toggle buttons
- Added showEffects prop to TrackControls interface
- Effects button (E) now shows active state with primary color when toggled
- Automation button (A) already had active state functionality
- Both buttons now provide clear visual feedback when active
- Updated Track component to pass showEffects state to TrackControls

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:25:33 +01:00
8ec3505581 feat: enhance track controls and improve master fader layout
- Integrated all R/S/A/E buttons into TrackControls component
- Removed duplicate button sections from Track component
- Updated CircularKnob pan visualization to show no arc at center position
- Arc now extends from center to value for directional indication
- Moved MasterControls to dedicated right sidebar
- Removed master controls from PlaybackControls footer
- Optimized TrackControls spacing (gap-1.5, smaller buttons)
- Cleaner separation between transport and master control sections

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:22:52 +01:00
441920ee70 feat: redesign track and master controls with integrated fader+meters+pan
Unified design language across all tracks and master section:
- Created TrackFader component: vertical fader with horizontal meter bars
- Created TrackControls: integrated pan + fader + mute in compact layout
- Created MasterFader: similar design but larger for master output
- Created MasterControls: master version with pan + fader + mute
- Updated Track component to use new TrackControls
- Updated PlaybackControls to use new MasterControls
- Removed old VerticalFader and separate meter components

New features:
- Horizontal peak/RMS meter bars behind fader (top=peak, bottom=RMS)
- Color-coded meters (green/yellow/red based on dB levels)
- dB scale labels and numeric readouts
- Integrated mute button in controls
- Consistent circular pan knobs
- Professional DAW-style channel strip appearance
- Master section includes clip indicator

Visual improvements:
- Unified design across all tracks and master
- Compact vertical layout saves space
- Real-time level monitoring integrated with volume control
- Smooth animations and transitions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-19 00:08:36 +01:00
c33a77270b feat: complete Phase 10.3 - master metering with peak/RMS display
Added comprehensive master output level monitoring:
- Created MasterMeter component with dual vertical bars (peak + RMS)
- Implemented real-time level monitoring in useMultiTrackPlayer hook
- Added master analyser node connected to audio output
- Displays color-coded levels (green/yellow/red) based on dB thresholds
- Shows numerical dB readouts for both peak and RMS
- Includes clickable clip indicator with reset functionality
- Integrated into PlaybackControls next to master volume

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 23:56:53 +01:00
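The metering commit above colors levels green/yellow/red based on dB thresholds. A sketch of such a mapping; the specific cutoffs (-12 dB, -3 dB) are assumptions for illustration, since the commit does not state them:

```typescript
// Illustrative green/yellow/red mapping for a peak/RMS meter segment.
// The exact thresholds here are assumed; the commit only states that
// colors are derived from dB levels.

type MeterColor = "green" | "yellow" | "red";

function meterColorForDb(db: number): MeterColor {
  if (db >= -3) return "red";     // near clipping
  if (db >= -12) return "yellow"; // hot but safe
  return "green";                 // nominal
}
```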
c54d5089c5 feat: complete Phase 9.3 - automation recording with write/touch/latch modes
Implemented comprehensive automation recording system for volume, pan, and effect parameters:

- Added automation recording modes:
  - Write: Records continuously during playback when values change
  - Touch: Records only while control is being touched/moved
  - Latch: Records from first touch until playback stops

- Implemented value change detection (0.001 threshold) to prevent infinite loops
- Fixed React setState-in-render errors by:
  - Using queueMicrotask() to defer state updates
  - Moving lane creation logic to useEffect
  - Properly memoizing touch handlers with useMemo

- Added proper value ranges for effect parameters:
  - Frequency: 20-20000 Hz
  - Q: 0.1-20
  - Gain: -40 to +40 dB

- Enhanced automation lane auto-creation with parameter-specific ranges
- Added touch callbacks to all parameter controls (volume, pan, effects)
- Implemented throttling (100ms) to avoid excessive automation points

Technical improvements:
- Used tracksRef and onRecordAutomationRef to ensure latest values in animation loops
- Added proper cleanup on playback stop
- Optimized recording to only trigger when values actually change

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 23:29:18 +01:00
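The recording commit above combines a 0.001 change threshold with 100ms throttling to avoid flooding lanes with points. A sketch of that guard as a small closure (structure and names are illustrative):

```typescript
// Only emit an automation point when the value actually changed (beyond the
// 0.001 threshold) and at most once per 100 ms, as the commit describes.

const CHANGE_THRESHOLD = 0.001;
const THROTTLE_MS = 100;

function createRecordingGuard() {
  let lastValue = Number.NaN;
  let lastTime = -Infinity;

  return function shouldRecord(value: number, nowMs: number): boolean {
    const changed =
      Number.isNaN(lastValue) || Math.abs(value - lastValue) > CHANGE_THRESHOLD;
    const throttled = nowMs - lastTime < THROTTLE_MS;
    if (!changed || throttled) return false;
    lastValue = value;
    lastTime = nowMs;
    return true;
  };
}
```

Touch and latch modes would additionally gate this check on whether the control is currently (or has been) touched during playback.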
a1f230a6e6 docs: update Phase 9 automation completion status
Phase 9 automation is now mostly complete!

Completed:
- 9.1: Automation lanes UI (volume, pan, effects with dropdown selector)
- 9.2: Automation points (add/remove/drag/select/delete)
- 9.3: Real-time automation playback (volume, pan, effects)
  - Continuous evaluation via requestAnimationFrame
  - Proper parameter range conversion
  - Works with all effect types

Remaining:
- Automation recording (write mode)
- Mode-specific recording behavior (touch/latch)
- Copy/paste automation
- Bezier curve support

The automation system is now fully functional for playback!
2025-11-18 19:08:43 +01:00
4c6453d115 feat: add effect parameter automation and fix mode logic
Completed Phase 9.3 - Full Automation Playback:
- Effect parameter automation implementation
- Fixed automation mode logic (now applies in all modes)
- Automatic parameter range conversion (normalized to actual values)

Effect parameter automation:
- Parses effect parameter IDs (format: effect.{effectId}.{paramName})
- Finds corresponding effect nodes in audio graph
- Converts normalized 0-1 automation values to actual parameter ranges
- Applies parameters using updateEffectParameters during playback
- Works with all effect types (filters, dynamics, time-based, etc.)

Automation mode fix:
- Removed incorrect mode !== 'read' checks
- Automation now plays back in all modes (read/write/touch/latch)
- Mode will control recording behavior, not playback

Technical notes:
- Used type assertion (as any) for dynamic parameter updates
- Maintains parameter range from automation lane valueRange
- Integrated with existing effect update mechanism

Phase 9 Status:
- 9.1: Automation lanes UI complete
- 9.2: Automation points complete
- 9.3: Real-time playback (volume, pan, effects) complete
- 9.3: Automation recording (next milestone)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 19:08:18 +01:00
dac8ac4723 feat: implement real-time automation playback for volume and pan
Phase 9.3 - Automation Playback:
- Added real-time automation evaluation during playback
- Automation values are applied continuously via requestAnimationFrame
- Volume automation: Interpolates between points and applies to gain nodes
- Pan automation: Converts 0-1 values to -1 to 1 for StereoPannerNode

Implementation details:
- New applyAutomation() function runs in RAF loop alongside level monitoring
- Evaluates automation at current playback time using evaluateAutomationLinear
- Applies values using setValueAtTime for smooth Web Audio API parameter changes
- Automation loop lifecycle matches playback (start/pause/stop/cleanup)
- Respects automation mode (only applies when mode !== 'read')

Technical improvements:
- Added automationFrameRef for RAF management
- Proper cleanup in pause(), unmount, and playback end scenarios
- Integrated with existing effect chain restart mechanism
- Volume automation multiplied with track gain (mute/solo state)

Next steps:
- Effect parameter automation (TODO in code)
- Automation recording (write mode implementation)
- Touch and latch modes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 19:06:02 +01:00
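The playback commit above converts normalized automation values into parameter ranges: pan from 0..1 to the StereoPannerNode's -1..1, and volume as a multiplier on the track's mute/solo gain. A sketch of those conversions (helper names are illustrative):

```typescript
// Automation stores normalized 0..1 values; the RAF loop maps them into
// actual Web Audio parameter ranges before applying them.

function panFromNormalized(v: number): number {
  return v * 2 - 1; // 0 → hard left (-1), 0.5 → center (0), 1 → hard right (+1)
}

function effectiveGain(trackGain: number, automationValue: number): number {
  return trackGain * automationValue; // automation scales, never overrides mute
}

// In the animation loop these would be pushed to the graph with e.g.
// pannerNode.pan.setValueAtTime(panFromNormalized(v), audioCtx.currentTime).
```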
1666407ac2 docs: update Phase 9 automation progress in PLAN.md
Completed automation features:
- 9.1: Automation lanes UI (volume, pan, effects with dropdown)
- 9.2: Automation points (add/remove/drag/select/delete)
- UI: Automation modes selector (R/W/T/L)

Remaining automation work:
- 9.3: Real-time automation playback
- 9.3: Automation recording
- 9.3: Automation modes implementation
- 9.2: Copy/paste automation, bezier curves
2025-11-18 19:02:57 +01:00
30ebd52b4c refactor: improve effects panel layout
Effects panel improvements:
- Removed track name label header
- Moved "Add Effect" button to top left corner
- Button now uses self-start alignment for left positioning
- Cleaner, more minimal design

Layout changes:
- Button appears consistently in top left whether effects exist or not
- More space for effect devices
- Better visual hierarchy

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 19:00:59 +01:00
623284e8b5 fix: match automation controls width to track controls (w-48 = 192px)
Fixed playhead marker alignment issue:
- Changed automation controls from w-[180px] to w-48 (192px)
- Now matches track controls panel width exactly
- Playhead marker now aligns perfectly between waveform and automation

Technical details:
- Track controls use Tailwind's w-48 class (12rem = 192px)
- Automation controls were using w-[180px] causing 12px misalignment
- Both sidebars now use consistent w-48 sizing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:59:31 +01:00
a5c5289424 refactor: move automation controls to left sidebar, simplify layout
Major automation UX improvements:
- Moved all automation controls to left sidebar (180px, matching track controls)
  - Parameter dropdown selector
  - Automation mode button (R/W/T/L with color coding)
  - Height adjustment buttons (+/-)
- Automation canvas now fills right side (matching waveform width exactly)
- Removed AutomationHeader component (no longer needed)
- Removed eye icon (automation visibility controlled by "A" button on track)

Two-column layout consistency:
- Left: 180px sidebar with all controls
- Right: Flexible canvas area matching waveform width
- Perfect vertical alignment between waveform and automation

Simplified AutomationLane component:
- Now only renders the canvas area with points
- All controls handled in parent Track component
- Cleaner, more maintainable code structure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:57:51 +01:00
3cc4cb555a fix: align automation lane with waveform, improve header layout
Automation lane improvements:
- Lane now aligns exactly with waveform width using two-column layout
- Added 180px left spacer to match track controls sidebar
- Playhead marker now aligns perfectly with waveform

Automation header improvements:
- Dropdown has fixed width (min-w-[120px] max-w-[200px]) instead of flex-1
- Eye icon (show/hide) is now positioned absolutely on the right
- Cleaner, more compact header layout

Visual consistency:
- Removed redundant border-b from AutomationLane (handled by parent)
- Automation lane and waveform now perfectly aligned vertically

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:53:08 +01:00
eb445bfa3a refactor: single automation lane with parameter dropdown and fixed-height effects panel
Phase 9 Automation Improvements:
- Replaced multiple automation lanes with single lane + parameter selector
- Added dropdown in automation header to switch between parameters:
  - Track parameters: Volume, Pan
  - Effect parameters: Dynamically generated from effect chain
- Lanes are created on-demand when parameter is selected
- Effects panel now has fixed height (280px) with scroll
  - Panel no longer resizes when effects are expanded
  - Maintains consistent UI layout

Technical changes:
- Track.automation.selectedParameterId: Tracks current parameter
- AutomationHeader: Added parameter dropdown props
- AutomationLane: Passes parameter selection to header
- Track.tsx: Single lane rendering with IIFE for parameter list
- Effects panel: h-[280px] with flex layout and overflow-y-auto

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:49:44 +01:00
d2709b8fc2 refactor: move effects panel from global to per-track
- Added showEffects property to Track type
- Added "E" button with Sparkles icon to toggle per-track effects
- Effects panel now appears below each track when toggled
- Removed global EffectsPanel from AudioEditor
- Updated useMultiTrack to persist showEffects state
- Streamlined workflow: both automation and effects are now per-track

This aligns the UX with professional DAWs like Ableton Live, where
effects and automation are track-scoped rather than global.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:44:06 +01:00
03a7e2e485 feat: implement Phase 9.1 - Automation lanes UI
Added comprehensive automation lane UI with Ableton-style design:

**Automation Components:**
- AutomationLane: Canvas-based rendering with grid lines, curves, and points
- AutomationHeader: Parameter name, mode selector, value display
- AutomationPoint: Interactive draggable points with hover states

**Automation Utilities:**
- createAutomationLane/Point: Factory functions
- evaluateAutomationLinear: Interpolation between points
- formatAutomationValue: Display formatting with unit support
- addAutomationPoint/updateAutomationPoint/removeAutomationPoint

**Track Integration:**
- Added "A" toggle button in track control panel
- Automation lanes render below waveform
- Default lanes for Volume (orange) and Pan (green)
- Support for add/edit/delete automation points
- Click to add, drag to move, double-click to delete

**Visual Design:**
- Dark background with horizontal grid lines
- Colored curves with semi-transparent fill (20% opacity)
- 4-6px automation points, 8px on hover
- Mode indicators (Read/Write/Touch/Latch) with colors
- Value labels and current value display

Ready for playback integration in next step.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:34:35 +01:00
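The Phase 9.1 commit above names `evaluateAutomationLinear` as the interpolation helper between points. A plausible sketch of that role, assuming points hold a time in seconds and a normalized value (this is not the project's actual implementation):

```typescript
// Linear evaluation over automation points: clamp outside the first/last
// point, interpolate linearly between neighbors otherwise.

interface AutomationPoint {
  time: number;  // seconds
  value: number; // normalized 0..1
}

function evaluateAutomationLinear(points: AutomationPoint[], t: number): number {
  if (points.length === 0) return 0;
  const sorted = [...points].sort((a, b) => a.time - b.time);
  if (t <= sorted[0].time) return sorted[0].value;  // clamp before first point
  const last = sorted[sorted.length - 1];
  if (t >= last.time) return last.value;            // clamp after last point
  for (let i = 1; i < sorted.length; i++) {
    if (t <= sorted[i].time) {
      const a = sorted[i - 1];
      const b = sorted[i];
      const frac = (t - a.time) / (b.time - a.time); // 0..1 between neighbors
      return a.value + (b.value - a.value) * frac;
    }
  }
  return last.value; // unreachable; satisfies the return-type check
}
```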
f6857bfc7b fix: restore waveform click-to-seek functionality
Re-enabled mouse event handlers on waveform canvas that were accidentally
removed. Users can now:
- Click to seek to a specific position
- Drag to create selection regions

Also fixed TypeScript error by properly typing EffectType parameter in
handleAddEffect callback.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:18:17 +01:00
17381221d8 feat: refine UI with effects panel improvements and visual polish
Major improvements:
- Fixed multi-file import (FileList to Array conversion)
- Auto-select first track when adding to empty project
- Global effects panel folding state (independent of track selection)
- Effects panel collapsed/disabled when no track selected
- Effect device expansion state persisted per-device
- Effect browser with searchable descriptions

Visual refinements:
- Removed center dot from pan knob for cleaner look
- Simplified fader: removed volume fill overlay, dynamic level meter visible through semi-transparent handle
- Level meter capped at fader position (realistic mixer behavior)
- Solid background instead of gradient for fader track
- Subtle volume overlay up to fader handle
- Fixed track control width flickering (consistent 4px border)
- Effect devices: removed shadows/rounded corners for flatter DAW-style look, added consistent border-radius
- Added border between track control and waveform area

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 18:13:38 +01:00
839128a93f fix: add missing onUpdateTrack prop to Track component
Fixed "onUpdateTrack is not defined" error by:
- Added onUpdateTrack to TrackProps interface
- Added onUpdateTrack to Track component destructuring
- Passed onUpdateTrack prop from TrackList to Track

This prop is required for:
- Track height resizing functionality
- Automation lane updates
- General track property updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 16:44:19 +01:00
ecf7f060ec feat: Complete Ableton-style track layout redesign
Track height & spacing improvements:
- Increased DEFAULT_TRACK_HEIGHT from 180px to 240px for vertical controls
- Increased MIN_TRACK_HEIGHT from 60px to 120px
- Increased MAX_TRACK_HEIGHT from 300px to 400px
- Added COLLAPSED_TRACK_HEIGHT constant (48px)
- Reduced control panel gap from 8px to 6px for tighter spacing
- Added min-h-0 and overflow-hidden to prevent flex overflow
- Optimized fader container with max-h-[140px] constraint

Clip-style waveform visualization (Ableton-like):
- Wrapped waveform canvas in visible "clip" container
- Added border, rounded corners, and shadow for clip identity
- Added 16px clip header bar showing track name
- Implemented hover state for better interactivity
- Added gradient background from-foreground/5 to-transparent

Track height resize functionality:
- Added draggable bottom-edge resize handle
- Implemented cursor-ns-resize with hover feedback
- Constrain resizing to MIN/MAX height range
- Real-time height updates with smooth visual feedback
- Active state highlighting during resize

Effects section visual integration:
- Changed from solid background to gradient (from-muted/80 to-muted/60)
- Reduced device rack height from 192px to 176px for better proportion
- Improved visual hierarchy and connection to track row

Flexible VerticalFader component:
- Changed from fixed h-32 (128px) to flex-1 layout
- Added min-h-[80px] and max-h-[140px] constraints
- Allows parent container to control actual height
- Maintains readability and proportions at all sizes

CSS enhancements (globals.css):
- Added .track-clip-container utility class
- Added .track-clip-header utility class
- Theme-aware clip styling for light/dark modes
- OKLCH color space for consistent appearance

Visual results:
- Professional DAW appearance matching Ableton Live
- Clear clip/region boundaries for audio editing
- Better proportions for vertical controls (240px tracks)
- Resizable track heights (120-400px range)
- Improved visual hierarchy and organization

Files modified:
- types/track.ts (height constants)
- components/tracks/Track.tsx (layout + clip styling + resize)
- components/ui/VerticalFader.tsx (flexible height)
- app/globals.css (clip styling)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 16:41:58 +01:00
9b1eedc379 feat: Ableton Live-style effects and complete automation system
Enhanced visual design:
- Improved device rack container with darker background and inner shadow
- Device cards now have rounded corners, shadows, and colored indicators
- Better visual separation between enabled/disabled effects
- Active devices highlighted with accent border

Complete automation infrastructure (Phase 9):
- Created comprehensive type system for automation lanes and points
- Implemented AutomationPoint component with drag-and-drop editing
- Implemented AutomationHeader with mode controls (Read/Write/Touch/Latch)
- Implemented AutomationLane with canvas-based curve rendering
- Integrated automation lanes into Track component below effects
- Created automation playback engine with real-time interpolation
- Added automation data persistence to localStorage

Automation features:
- Add/remove automation points by clicking/double-clicking
- Drag points to change time and value
- Multiple automation modes (Read, Write, Touch, Latch)
- Linear and step curve types (bezier planned)
- Adjustable lane height (60-180px)
- Show/hide automation per lane
- Real-time value display at playhead
- Color-coded lanes by parameter type
- Keyboard delete support (Delete/Backspace)

Track type updates:
- Added automation field to Track interface
- Updated track creation to initialize empty automation
- Updated localStorage save/load to include automation data

Files created:
- components/automation/AutomationPoint.tsx
- components/automation/AutomationHeader.tsx
- components/automation/AutomationLane.tsx
- lib/audio/automation/utils.ts (helper functions)
- lib/audio/automation/playback.ts (playback engine)
- types/automation.ts (complete type system)

Files modified:
- components/effects/EffectDevice.tsx (Ableton-style visual improvements)
- components/tracks/Track.tsx (automation lanes integration)
- types/track.ts (automation field added)
- lib/audio/track-utils.ts (automation initialization)
- lib/hooks/useMultiTrack.ts (automation persistence)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 16:30:01 +01:00
dc9647731d feat: redesign track controls to Ableton Live style
Major UI Redesign:
- Reduced track control width from 288px to 192px (33% narrower)
- Replaced horizontal sliders with vertical fader and circular knob
- More compact, professional appearance matching Ableton Live
- Global settings dialog replaces inline recording settings

New Components Created:
- VerticalFader.tsx: Vertical volume control with integrated level meter
  - Shows volume dB at top, level dB at bottom
  - Level meter displayed as background gradient
  - Draggable handle for precise control

- CircularKnob.tsx: Rotary pan control
  - SVG-based rotary knob with arc indicator
  - Vertical drag interaction (200px sensitivity)
  - Displays L/C/R values

- GlobalSettingsDialog.tsx: Centralized settings
  - Tabbed interface (Recording, Playback, Interface)
  - Recording settings moved from inline to dialog
  - Accessible via gear icon in header
  - Modal dialog with backdrop

Track Control Panel Changes:
- Track name: More compact (text-xs)
- Buttons: Smaller (h-6 w-6), text-based S/M buttons
- Record button: Circle indicator instead of icon
- Pan: Circular knob (40px) instead of horizontal slider
- Volume: Vertical fader with integrated meter
- Removed: Inline recording settings panel

Header Changes:
- Added Settings button (gear icon) before ThemeToggle
- Opens GlobalSettingsDialog on click
- Clean, accessible from anywhere

Props Cleanup:
- Removed recordingSettings props from Track/TrackList
- Removed onInputGainChange, onRecordMonoChange, onSampleRateChange props
- Settings now managed globally via dialog

Technical Details:
- VerticalFader uses mouse drag for smooth control
- CircularKnob rotates -135° to +135° (270° range)
- Global event listeners for drag interactions
- Proper cleanup on unmount
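The drag mapping described above can be sketched as follows; the function names are illustrative, not necessarily those in CircularKnob.tsx.

```typescript
// Sketch of the knob's vertical-drag interaction: 200px of drag spans the
// full pan range (-1..+1), and the arc indicator covers -135° to +135°.
const DRAG_SENSITIVITY_PX = 200;

function panFromDrag(startPan: number, dragDeltaY: number): number {
  // Dragging up (negative deltaY) increases the value; pan range width is 2.
  const pan = startPan + (-dragDeltaY / DRAG_SENSITIVITY_PX) * 2;
  return Math.max(-1, Math.min(1, pan));
}

function knobAngleDeg(pan: number): number {
  return pan * 135; // -1 -> -135°, 0 -> 0° (center), +1 -> +135°
}
```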

Benefits:
✅ 33% narrower tracks = more tracks visible
✅ Professional Ableton-style appearance
✅ Cleaner, less cluttered interface
✅ Global settings accessible anywhere
✅ Better use of vertical space
✅ Consistent with industry-standard DAWs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 16:15:04 +01:00
49dd0c2d38 feat: complete Phase 8.3 - recording settings (input gain, mono/stereo, sample rate)
Recording Settings (Phase 8.3):
- Added input gain control (0.0-2.0 with dB display)
- Real-time gain adjustment via GainNode during recording
- Mono/Stereo recording mode selection
- Sample rate matching (44.1kHz, 48kHz, 96kHz)
- Mono conversion averages all channels when enabled
- Recording settings panel shown when track is armed

Components Created:
- RecordingSettings.tsx: Settings panel with gain slider, mono/stereo toggle, sample rate buttons

Components Modified:
- useRecording hook: Added settings state and GainNode integration
- AudioEditor: Pass recording settings to TrackList
- TrackList: Forward settings to Track components
- Track: Show RecordingSettings when armed for recording

Technical Details:
- GainNode inserted between source and analyser in recording chain
- Real-time gain updates via gainNode.gain.value
- AudioContext created with target sample rate
- Mono conversion done post-recording by averaging channels
- Settings persist during recording session
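The post-recording mono conversion described above can be sketched on raw channel data (the `mixToMono` name is an assumption):

```typescript
// Sketch: average N channels of sample data into a single mono channel.
function mixToMono(channels: Float32Array[]): Float32Array {
  const length = channels[0]?.length ?? 0;
  const out = new Float32Array(length);
  for (const ch of channels) {
    for (let i = 0; i < length; i++) out[i] += ch[i] / channels.length;
  }
  return out;
}
```

In the actual hook this would run on the channel data of the recorded AudioBuffer before it is written to the track.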

Phase 8 Complete:
- ✅ Phase 8.1: Audio Input
- ✅ Phase 8.2: Recording Controls (punch/overdub)
- ✅ Phase 8.3: Recording Settings
- 📋 Phase 9: Automation (NEXT)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:51:38 +01:00
5f0017facb feat: complete Phase 8.2 - punch-in/punch-out and overdub recording
Recording Controls (Phase 8.2):
- Added punch-in/punch-out UI controls with time inputs
- Punch mode toggle button with visual feedback
- Set punch times from current playback position
- Collapsible punch time editor shown when enabled
- Overdub mode toggle for layering recordings
- Overdub implementation mixes recorded audio with existing track audio

Components Modified:
- PlaybackControls: Added punch and overdub controls with icons
- AudioEditor: Implemented overdub mixing logic and state management
- PLAN.md: Marked Phase 8.1 and 8.2 as complete

Technical Details:
- Overdub mixes audio buffers by averaging samples to avoid clipping
- Handles multi-channel audio correctly
- Punch controls use AlignVerticalJustifyStart/End icons
- Overdub uses Layers icon for visual clarity
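The overdub mix described above (averaging to avoid clipping) can be sketched per channel; the function name is illustrative:

```typescript
// Sketch: mix newly recorded samples into existing track audio by averaging,
// padding the shorter buffer with silence.
function overdubMix(existing: Float32Array, recorded: Float32Array): Float32Array {
  const length = Math.max(existing.length, recorded.length);
  const out = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const a = i < existing.length ? existing[i] : 0;
    const b = i < recorded.length ? recorded[i] : 0;
    out[i] = (a + b) / 2; // average keeps the sum inside -1..+1
  }
  return out;
}
```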

Phase 8 Progress:
- ✅ Phase 8.1: Audio Input (complete)
- ✅ Phase 8.2: Recording Controls (complete)
- 🔲 Phase 8.3: Recording Settings (next)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:44:13 +01:00
a5a75a5f5c feat: show volume off icon when track volume is 0
- Volume icon now switches between Volume2 and VolumeX based on volume value
- Matches behavior of master volume controls for consistency
- Icon size remains consistent at h-3.5 w-3.5

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:36:25 +01:00
85d27c3b13 feat: standardize icon sizes and replace pan icon with UnfoldHorizontal
- Replaced CircleArrowOutUpRight pan icon with UnfoldHorizontal
- Standardized all track control icons to h-3.5 w-3.5:
  - Volume, Pan, Mic, and Gauge icons now have consistent sizing
- Improves visual consistency across the track control panel

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:35:02 +01:00
546c80e4ae feat: display dB values and use gauge icon for level meters
Changed level meter display to show professional dB values instead
of percentages, and replaced volume icon with gauge icon.

Display Changes:
- Show dB values (-60 to 0) instead of percentage (0-100%)
- Use monospace font for consistent digit alignment
- Show "-∞" for silence (level = 0) instead of "-60"
- Rounded to nearest integer dB (no decimals for cleaner display)

Icon Updates:
- Replaced Volume2 icon with Gauge icon for playback levels
- Kept Mic icon for recording input levels
- Gauge better represents metering/measurement vs audio control

dB Conversion:
- Formula: dB = (normalized * 60) - 60
- Reverse of: normalized = (dB + 60) / 60
- Range: -60dB (silence) to 0dB (full scale)

Display Examples:
  0.00 (0%) = -∞   (silence)
  0.17 (17%) = -50dB (very quiet)
  0.50 (50%) = -30dB (moderate)
  0.83 (83%) = -10dB (loud)
  1.00 (100%) = 0dB (full scale)
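The conversion and examples above boil down to a small formatting helper; `formatDb` is an assumed name for illustration:

```typescript
// Sketch: normalized meter value (0..1) -> integer dB label, "-∞" at silence.
function formatDb(normalized: number): string {
  if (normalized <= 0) return "-∞";   // silence, per the display rule above
  const db = normalized * 60 - 60;    // 0..1 maps to -60..0 dB
  return `${Math.round(db)}`;         // rounded, no decimals
}
```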

Benefits:
✅ Professional dB notation matching industry standards
✅ Clearer for audio engineers and musicians
✅ Easier to identify levels relative to 0dBFS
✅ Gauge icon better conveys "measurement"
✅ Monospace font prevents number jumping

Now meters show "-18" instead of "70%", making it immediately
clear where you are in the dynamic range.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:28:35 +01:00
dc567d0144 feat: add professional gradient to level meters matching dB scale
Replaced solid color blocks with smooth gradient to match
professional audio metering standards and dB scale mapping.

The Problem:
- Hard color transitions (green/yellow/red) looked jarring
- Didn't match professional DAW aesthetics
- Color didn't reflect actual dB values visually

The Solution:
- Implemented CSS linear gradient across meter bar
- Gradient matches dB scale breakpoints:
  * Green: 0-70% (-60dB to -18dB) - Safe zone
  * Yellow: 70-90% (-18dB to -6dB) - Getting hot
  * Red: 90-100% (-6dB to 0dB) - Very loud/clipping

Gradient Details:
Horizontal: linear-gradient(to right, ...)
Vertical: linear-gradient(to top, ...)

Color stops:
  0%: rgb(34, 197, 94)   - Green start
 70%: rgb(34, 197, 94)   - Green hold
 85%: rgb(234, 179, 8)   - Yellow transition
100%: rgb(239, 68, 68)   - Red peak
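Those color stops can be assembled into the CSS value described above; this is a sketch, with the function name assumed:

```typescript
// Sketch: build the meter gradient string for either orientation.
function meterGradient(direction: "to right" | "to top"): string {
  return `linear-gradient(${direction}, ` +
    `rgb(34, 197, 94) 0%, ` +    // green start
    `rgb(34, 197, 94) 70%, ` +   // green hold (safe zone to -18dB)
    `rgb(234, 179, 8) 85%, ` +   // yellow transition
    `rgb(239, 68, 68) 100%)`;    // red peak (clipping zone)
}
```

The returned string would be used as the `background` of the meter bar, with the bar's fill width/height clipped to the current level.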

Visual Behavior:
- Smooth color transition as levels increase
- Green dominates safe zone (-60dB to -18dB)
- Yellow appears in warning zone (-18dB to -6dB)
- Red shows critical/clipping zone (-6dB to 0dB)
- Matches Pro Tools, Logic Pro, Ableton Live style

Benefits:
✅ Professional appearance matching industry DAWs
✅ Smooth visual feedback instead of jarring transitions
✅ Colors accurately represent dB ranges
✅ Better user experience for mixing/mastering
✅ Gradient visible even at low levels

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:26:10 +01:00
db01209f77 feat: implement professional logarithmic dB scale for level meters
Converted level meters from linear to logarithmic (dB) scale to
match professional audio software behavior and human hearing.

The Problem:
- Linear scale (0-100%) doesn't match perceived loudness
- Doesn't match professional DAW meter behavior
- Half-volume audio appears at 50% but sounds much quieter
- No industry-standard dB reference

The Solution:
- Convert linear amplitude to dB: 20 * log10(linear)
- Normalize -60dB to 0dB range to 0-100% display
- Matches professional audio metering standards

dB Scale Mapping:
  0 dB (linear 1.0)    = 100% (full scale, clipping)
 -6 dB (linear ~0.5)   = 90%  (loud)
-12 dB (linear ~0.25)  = 80%  (normal)
-20 dB (linear ~0.1)   = 67%  (moderate)
-40 dB (linear ~0.01)  = 33%  (quiet)
-60 dB (linear ~0.001) = 0%   (silence threshold)

Implementation:
- Added linearToDbScale() function to both hooks
- useMultiTrackPlayer: playback level monitoring
- useRecording: input level monitoring
- Formula: (dB - minDb) / (maxDb - minDb)
- Range: -60dB (min) to 0dB (max)
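The commit names a `linearToDbScale()` helper; a sketch consistent with the formula and range above (exact signature assumed):

```typescript
// Sketch: convert linear amplitude (0..1) to a normalized dB-scale meter
// value, mapping -60dB..0dB onto 0..1.
function linearToDbScale(linear: number, minDb = -60, maxDb = 0): number {
  if (linear <= 0) return 0;                       // silence floor
  const db = 20 * Math.log10(linear);              // linear amplitude -> dBFS
  const normalized = (db - minDb) / (maxDb - minDb);
  return Math.max(0, Math.min(1, normalized));     // clamp to 0..1
}
```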

Benefits:
✅ Professional audio metering standards
✅ Matches human perception of loudness
✅ Consistent with DAWs (Pro Tools, Logic, Ableton)
✅ Better visual feedback for mixing/mastering
✅ More responsive in useful range (-20dB to 0dB)

Now properly mastered tracks will show levels in the
90-100% range, matching what you'd see in professional software.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:23:33 +01:00
a0ce83a654 fix: use Float32Array for accurate full-range level measurement
Switched from Uint8Array to Float32Array for level monitoring
to get accurate, full-precision audio measurements.

The Problem:
- getByteTimeDomainData() uses Uint8Array (0-255)
- Byte conversion: (value - 128) / 128 has asymmetric range
- Positive peaks: (255-128)/128 = 0.992 (not full 1.0)
- Precision loss from byte quantization
- Mastered tracks with peaks at 0dBFS only showed ~50%

The Solution:
- Switched to getFloatTimeDomainData() with Float32Array
- Returns actual sample values directly in -1.0 to +1.0 range
- No conversion needed, no precision loss
- Accurate representation of audio peaks

Changes Applied:
- useMultiTrackPlayer: Float32Array with analyser.fftSize samples
- useRecording: Float32Array with analyser.fftSize samples
- Peak detection: Math.abs() on float values directly
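The old byte-path asymmetry described above is easy to see in isolation (sketch; `byteToFloat` is an illustrative name for the removed conversion):

```typescript
// Sketch of the old getByteTimeDomainData() conversion: bytes 0..255
// map to -1.0..~0.992, so positive peaks can never reach full scale.
// getFloatTimeDomainData() returns float samples directly and needs none of this.
function byteToFloat(byteValue: number): number {
  return (byteValue - 128) / 128; // 255 -> 127/128 ≈ 0.992
}
```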

Benefits:
✅ Full 0-100% range for properly mastered audio
✅ Higher precision (32-bit float vs 8-bit byte)
✅ Symmetric range (-1.0 to +1.0, not -1.0 to ~0.992)
✅ Accurate metering for professional audio files

Now mastered tracks with peaks at 0dBFS will correctly show
~100% on the meters instead of being capped at 50%.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:21:31 +01:00
3e6fbda755 fix: move analyser before gain node to show true audio levels
Repositioned analyser nodes in audio graph to measure raw audio
levels before volume/gain adjustments.

The Problem:
- Analyser was after gain node in signal chain
- Track volume defaults to 0.8 (80%)
- Audio was scaled down before measurement
- Meters only showed ~50% of actual audio peaks

The Solution:
- Moved analyser to immediately after source
- Now measures raw audio before any processing
- Shows true audio content independent of fader position

Audio Graph Changes:
Before: source -> gain -> pan -> effects -> analyser -> master
After:  source -> analyser -> gain -> pan -> effects -> master

Benefits:
✅ Meters show full 0-100% range based on audio content
✅ Meter reading independent of volume fader position
✅ Accurate representation of track audio levels
✅ Increased smoothingTimeConstant to 0.8 for smoother motion

This is how professional DAWs work - level meters show the
audio content, not the output level after the fader.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:17:42 +01:00
8367cbf6e7 fix: switch from RMS to peak detection for more accurate level meters
Changed level calculation from RMS to peak detection to show
more realistic and responsive meter values.

The Problem:
- RMS calculation produced values typically in 0-30% range
- Audio signals have low average RMS (0.1-0.3 for music)
- Meters appeared broken, never reaching higher levels

The Solution:
- Switched to peak detection (max absolute value)
- Peaks now properly show 0-100% range
- More responsive to transients and dynamics
- Matches typical DAW meter behavior

Algorithm Change:
Before (RMS):
  rms = sqrt(sum(normalized²) / length)

After (Peak):
  peak = max(abs(normalized))
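The new peak calculation can be sketched over one analyser readout buffer (function name assumed):

```typescript
// Sketch: peak detection over a time-domain sample window, as filled by
// analyser.getFloatTimeDomainData() -- samples are in the -1..+1 range.
function peakLevel(samples: Float32Array): number {
  let peak = 0;
  for (let i = 0; i < samples.length; i++) {
    const abs = Math.abs(samples[i]);
    if (abs > peak) peak = abs;
  }
  return peak; // 0..1, where 1.0 is full scale (0dBFS)
}
```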

Applied to Both:
- Recording input level monitoring (useRecording)
- Playback output level monitoring (useMultiTrackPlayer)

Benefits:
✅ Full 0-100% range utilization
✅ More responsive visual feedback
✅ Accurate representation of audio peaks
✅ Consistent with professional audio software

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:15:16 +01:00
a157172e3d fix: resolve playback level monitoring closure issue
Fixed playback level meters staying at 0% by resolving React closure
issue in the monitoring loop - same pattern as the recording fix.

The Problem:
- monitorPlaybackLevels callback checked stale `isPlaying` state
- Animation loop would run once and never continue
- Dependency on isPlaying caused callback recreation on every state change

The Solution:
- Added isMonitoringLevelsRef to track state independent of React
- Removed isPlaying dependency from callback (now has empty deps [])
- Set ref to true when starting playback
- Set ref to false when pausing, stopping, or ending playback
- Animation loop checks ref instead of stale closure state

Monitoring State Management:
- Start: play() sets isMonitoringLevelsRef.current = true
- Stop: pause(), stop(), onended, and cleanup set it to false
- Loop: continues while ref is true, stops when false

This ensures the requestAnimationFrame loop runs continuously
during playback and calculates real-time RMS levels for all tracks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:12:42 +01:00
6fbb677bd2 feat: implement real-time playback level monitoring for all tracks
Added comprehensive playback level monitoring system that shows
real-time audio levels during playback for each track.

useMultiTrackPlayer Hook:
- Added AnalyserNode for each track in audio graph
- Implemented RMS-based level calculation with requestAnimationFrame
- Added trackLevels state (Record<string, number>) tracking levels by track ID
- Insert analysers after effects chain, before master gain
- Monitor levels continuously during playback
- Clean up level monitoring on pause/stop

Audio Graph Chain:
source -> gain -> pan -> effects -> analyser -> master gain -> destination
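The RMS-based level calculation named above can be sketched over one analyser window (function name assumed; a later commit replaces this with peak detection):

```typescript
// Sketch: root-mean-square of a time-domain sample window.
function rmsLevel(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}
```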

AudioEditor Integration:
- Extract trackLevels from useMultiTrackPlayer hook
- Pass trackLevels down to TrackList component

TrackList & Track Components:
- Accept and forward trackLevels prop
- Pass playbackLevel to individual Track components
- Track component displays appropriate level:
  * Recording level (with "Input" label) when armed/recording
  * Playback level (with "Level" label) during normal playback

Visual Feedback:
- Color-coded meters: green -> yellow (70%) -> red (90%)
- Real-time percentage display
- Seamless switching between input and output modes

This completes Phase 8 (Recording) with full bidirectional level monitoring!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 15:01:55 +01:00
cf0c37caa6 fix: resolve recording level meter monitoring closure issue
Fixed the input level meter staying at 0% during recording by:

Closure Issue Resolution:
- Added isMonitoringRef to track monitoring state independent of React state
- Removed state dependencies from monitorInputLevel callback
- Animation loop now checks ref instead of stale closure state

Changes:
- Set isMonitoringRef.current = true when starting recording
- Set isMonitoringRef.current = false when stopping/pausing recording
- Animation frame continues while ref is true, stops when false
- Proper cleanup in stopRecording, pauseRecording, and unmount effect

This ensures the requestAnimationFrame loop continues properly and
updates the RMS level calculation in real-time during recording.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:56:30 +01:00
c0ca4d7913 fix: migrate existing tracks to new 180px height from localStorage
Added migration logic to useMultiTrack hook:
- When loading tracks from localStorage, check if height < DEFAULT_TRACK_HEIGHT
- Automatically upgrade old heights (120px, 150px) to new default (180px)
- Preserves custom heights that are already >= 180px
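The migration rule above can be sketched as a small pure function (the `StoredTrack` shape is an assumption; only the height field matters here):

```typescript
// Sketch: upgrade persisted tracks whose height predates the new default.
const DEFAULT_TRACK_HEIGHT = 180;

interface StoredTrack { id: string; height: number }

function migrateTrackHeight(track: StoredTrack): StoredTrack {
  // Old defaults (120px, 150px) are below the new minimum; custom heights
  // that are already >= 180px pass through untouched.
  return track.height < DEFAULT_TRACK_HEIGHT
    ? { ...track, height: DEFAULT_TRACK_HEIGHT }
    : track;
}
```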

This fixes the inline style issue where existing tracks had
style="height: 120px" that was cutting off the level meter.

After this update, refreshing the page will automatically upgrade
all existing tracks to the new height without losing any data.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:53:31 +01:00
fa397a1dfe fix: always show level meter and increase track height to 180px
Fixed layout issues with the level meter:

Track Height:
- Increased DEFAULT_TRACK_HEIGHT from 150px to 180px
- Ensures enough vertical space for all controls without clipping

Level Meter Display:
- Now always visible (not conditional on recording state)
- Shows "Input" with mic icon when track is armed or recording
- Shows "Level" with volume icon when not recording
- Displays appropriate level based on state

This prevents the meter from being cut off and provides consistent
visual feedback for all tracks. Future enhancement: show actual
playback output levels when not recording.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:51:26 +01:00
2775fdb517 fix: increase default track height to accommodate input meter
Increased DEFAULT_TRACK_HEIGHT from 120px to 150px to properly fit:
- Volume slider
- Pan slider
- Input level meter (when track is armed or recording)

This ensures all controls have adequate vertical spacing and the input
meter doesn't get cramped when it appears under the pan control.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:47:58 +01:00
f65dd0be7f feat: complete Phase 8.2 - integrate recording into multi-track UI
Completed the integration of recording functionality into the AudioEditor:

Recording Controls:
- Added global record button to PlaybackControls component
- Implemented track arming system (arm track before recording)
- Added visual feedback with pulsing red animations during recording
- Connected recording state from useRecording hook to UI

AudioEditor Integration:
- Added handleToggleRecordEnable for per-track record arming
- Added handleStartRecording with permission checks and toast notifications
- Added handleStopRecording to save recorded audio to track buffer
- Connected recording input level monitoring to track meters

TrackList Integration:
- Pass recording props (onToggleRecordEnable, recordingTrackId, recordingLevel)
- Individual tracks show input level meters when armed or recording
- Visual indicators for armed and actively recording tracks

This completes Phase 8 (Recording) implementation with:
✅ MediaRecorder API integration
✅ Input level monitoring with RMS calculation
✅ Per-track record arming
✅ Global record button
✅ Real-time visual feedback
✅ Permission handling
✅ Error handling with user notifications

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:44:15 +01:00
5e6c61d951 feat: implement Phase 8.1 - audio recording infrastructure
Added recording capabilities to the multi-track editor:
- useRecording hook with MediaRecorder API integration
- Audio input device enumeration and selection
- Microphone permission handling
- Input level monitoring with RMS calculation
- InputLevelMeter component with visual feedback
- Record-enable button per track with pulsing indicator
- Real-time input level display when recording

Recording infrastructure is complete. Next: integrate into AudioEditor
for global recording control and buffer storage.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:37:01 +01:00
166385d29a fix: drag-to-select now works reliably without Shift key
Fixed async state update issue where selections were being cleared
immediately after creation. The mouseUp handler now checks drag
distance directly instead of relying on async state, ensuring
selections persist correctly.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:28:09 +01:00
1dc0604635 feat: add intuitive drag-to-select on waveforms
Improved selection UX to match professional DAWs:
- Drag directly on waveform to create selections (no modifier keys needed)
- Click without dragging seeks playhead and clears selection
- 3-pixel drag threshold prevents accidental selections
- Fixed variable name conflict with existing file drag-and-drop feature

Users can now naturally drag across waveforms to select regions for
editing, providing a more intuitive workflow.
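The 3-pixel threshold described above reduces to a one-line check in the mouseUp handler (sketch; the function name is an assumption):

```typescript
// Sketch: below the threshold the gesture is a click (seek + clear
// selection); above it, a drag-to-select.
const DRAG_THRESHOLD_PX = 3;

function isSelectionDrag(startX: number, endX: number): boolean {
  return Math.abs(endX - startX) > DRAG_THRESHOLD_PX;
}
```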

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 14:25:49 +01:00
74879a42cf feat: implement multi-track waveform selection and editing with undo/redo
Added comprehensive selection and editing capabilities to multi-track editor:
- Visual selection overlay with Shift+drag interaction on waveforms
- Multi-track edit commands (cut, copy, paste, delete, duplicate)
- Full keyboard shortcut support (Ctrl+X/C/V/D, Delete, Ctrl+Z/Y)
- Complete undo/redo integration via command pattern
- Per-track selection state with localStorage persistence
- Audio buffer manipulation utilities (extract, insert, delete, duplicate segments)
- Toast notifications for all edit operations
- Red playhead to distinguish from blue selection overlay

All edit operations are fully undoable and integrated with the existing
history manager system.
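One of the buffer utilities named above (deleting a segment) can be sketched on raw channel data; the name and the sample-index API are assumptions:

```typescript
// Sketch: remove samples [start, end) from one channel and close the gap.
function deleteSegment(data: Float32Array, start: number, end: number): Float32Array {
  const out = new Float32Array(data.length - (end - start));
  out.set(data.subarray(0, start), 0); // samples before the selection
  out.set(data.subarray(end), start);  // samples after, shifted left
  return out;
}
```

The other operations (extract, insert, duplicate) follow the same copy-into-a-new-buffer pattern, one channel at a time.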

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 13:05:05 +01:00
beb7085c89 feat: complete Phase 7.4 - real-time track effects system
Implemented comprehensive real-time effect processing for multi-track audio:

Core Features:
- Per-track effect chains with drag-and-drop reordering
- Effect bypass/enable toggle per effect
- Real-time parameter updates (filters, dynamics, time-based, distortion, bitcrusher, pitch, timestretch)
- Add/remove effects during playback without interruption
- Effect chain persistence via localStorage
- Automatic playback stop when tracks are deleted

Technical Implementation:
- Effect processor with dry/wet routing for bypass functionality
- Real-time effect parameter updates using AudioParam setValueAtTime
- Structure change detection for add/remove/reorder operations
- Stale closure fix using refs for latest track state
- ScriptProcessorNode for bitcrusher, pitch shifter, and time stretch
- Dual-tap delay line for pitch shifting
- Overlap-add synthesis for time stretching

UI Components:
- EffectBrowser dialog with categorized effects
- EffectDevice component with parameter controls
- EffectParameters for all 19 real-time effect types
- Device rack with horizontal scrolling (Ableton-style)

Removed offline-only effects (normalize, fadeIn, fadeOut, reverse) as they don't fit the real-time processing model.

Completed all items in Phase 7.4:
- [x] Per-track effect chain
- [x] Effect rack UI
- [x] Effect bypass per track
- [x] Real-time effect processing during playback
- [x] Add/remove effects during playback
- [x] Real-time parameter updates
- [x] Effect chain persistence

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 12:08:33 +01:00
cbcd38b1ed refactor: remove master effects section, add TODO note
Removed the master effects section from AudioEditor footer:
- Simplified footer to only show transport controls
- Removed master channel strip area
- Removed border separator between sections

Added note to PLAN.md:
- 🔲 Master channel effects (TODO)
- Will implement master effect chain UI later
- Similar to per-track effects but for final mix
- Documented in Phase 7 multi-track features section

This cleans up the UI for now - we'll add master effects
with a proper device rack UI later, similar to how per-track
effects work (collapsible section with horizontal device cards).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:43:38 +01:00
ecb5152d21 feat: boost light theme to true neon intensity
Cranked up the neon vibes in light theme to match dark theme energy!

Light Theme Neon Boost:
- Bright cyan-tinted backgrounds (180-200° hue)
- Vivid magenta primary (0.28 chroma - max intensity!)
- Rich magenta/purple foreground (20% lightness, 0.12 chroma)
- Electric cyan accent (90% lightness, 0.12 chroma, 180° hue)
- Saturated borders (0.08 chroma)
- Neon cyan waveforms (60% lightness, 0.26 chroma)
- Magenta progress bars (58% lightness, 0.28 chroma)

Color Intensity Increased:
- Primary: 0.28 chroma (was 0.22 - 27% boost!)
- Foreground: 0.12 chroma (was 0.05 - 140% boost!)
- Accent: 0.12-0.18 chroma (was 0.05-0.10 - doubled!)
- Borders: 0.08 chroma (was 0.03 - nearly tripled!)
- Waveforms: 0.26-0.28 chroma (was 0.22 - 20% boost!)

Hue Shift:
- Cyan dominance (180-200° backgrounds)
- Magenta accents (300-320° primaries/foregrounds)
- Electric color combinations throughout

Now both themes have matching neon energy:
- Dark: Neon cyberpunk (purple/cyan/magenta)
- Light: Bright neon pop (cyan/magenta/purple)

Both themes now pop with vibrant, eye-catching colors! 

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:40:38 +01:00
bc7158dbe0 feat: vibrant neon color palette for both themes
Replaced gentle grays with eye-catching neon/vibrant colors!

Dark Theme - Neon Cyberpunk:
- Deep purple/blue backgrounds (265-290° hue)
- Bright neon magenta primary (75% lightness, 0.25 chroma, 310° hue)
- Neon cyan foreground (92% lightness, cyan tint)
- Vibrant borders with purple glow (30% lightness, 0.05 chroma)
- Electric accent colors throughout
- Neon cyan waveforms with magenta progress

Light Theme - Vibrant Pastels:
- Soft purple-tinted backgrounds (280-290° hue)
- Vivid magenta primary (55% lightness, 0.22 chroma)
- Rich purple foreground (25% lightness)
- Colorful borders and accents
- High saturation throughout (0.15-0.22 chroma)
- Vibrant purple/cyan waveforms

Color Intensity:
- Increased chroma from 0.14-0.15 to 0.20-0.25
- Much more saturated, punchy colors
- Vibrant accent colors (cyan, magenta, purple, green)
- Neon-style success (green), warning (yellow), destructive (red)

Hue Shifts:
- Purple/magenta theme (260-320° range)
- Cyan accents (180-200°)
- Warm warnings (80°)
- Green success (160°)

This gives a bold, modern, energetic aesthetic perfect
for a creative audio application - like FL Studio or
Ableton Live's colorful themes!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:39:23 +01:00
ee24f04d76 feat: add automatic waveform repaint on theme change
Implemented MutationObserver to detect theme changes:

- Added themeKey state that increments on theme change
- MutationObserver watches document.documentElement for class changes
- Detects when "dark" class is toggled
- Added themeKey to waveform effect dependencies
- Canvas automatically redraws with new theme colors

How it works:
1. Observer listens for class attribute changes on <html>
2. When dark mode is toggled, themeKey increments
3. useEffect dependency triggers canvas redraw
4. getComputedStyle reads fresh --color-waveform-bg value
5. Waveform renders with correct theme background

Benefits:
- Seamless theme transitions
- Waveform colors always match current theme
- No manual refresh needed
- Automatic cleanup on unmount

Now switching between light/dark themes instantly updates
all waveform backgrounds with the correct theme colors!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:36:49 +01:00
2a0d6cd673 fix: replace hardcoded colors with theme variables in Track component
Replaced hardcoded slate colors with theme-aware variables:

Track.tsx changes:
- Waveform area: bg-slate-900 → bg-waveform-bg
- Effects section: bg-slate-900/50 → bg-muted/50
- Expanded effects: bg-slate-900/30 → bg-muted/70
- Canvas background: rgb(15, 23, 42) → --color-waveform-bg

This ensures that:
- Light theme shows proper light gray backgrounds
- Dark theme shows proper dark gray backgrounds
- All components respect theme colors
- No more hardcoded colors that bypass the theme

Now the entire UI properly responds to theme changes with
the harmonized gray palettes we configured.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:34:14 +01:00
2948557e94 feat: soften light theme with gentler gray tones
Reduced contrast in light theme for softer, more comfortable appearance:

Background tones (darker, gentler):
- waveform-bg: 97% (was 99% - less bright)
- card: 96% (was 98% - softer)
- background: 93% (was 96% - more muted)
- muted: 90% (was 94% - darker)
- secondary/accent: 88% (was 92% - more visible)
- border: 85% (was 88% - stronger definition)
- input: 88% (was 92% - better contrast)

Foreground tones (lighter, softer):
- foreground: 28% (was 20% - less harsh)
- card-foreground: 28% (was 20% - softer)
- secondary-foreground: 32% (was 25% - lighter)
- muted-foreground: 48% (was 45% - easier to read)

Color adjustments:
- Reduced color intensity from 0.15 to 0.14
- Softer primary: oklch(58% 0.14 240)
- All colors slightly desaturated for gentler feel

Benefits:
- Reduced contrast reduces eye strain
- Softer, warmer appearance
- More comfortable for extended use
- Better balance between light and dark themes
- Modern low-contrast aesthetic
- Similar to Figma, Linear, or Notion's light modes

Both themes now have beautifully balanced, low-contrast
palettes that are easy on the eyes and professional.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:31:52 +01:00
0dfbdca00c feat: lighten dark theme for softer, gentler appearance
Significantly lightened all dark theme colors for reduced contrast:

Background tones (lighter, softer):
- waveform-bg: 16% (was 10% - too dark)
- background: 18% (was 12% - lifted significantly)
- card: 22% (was 15% - more visible)
- muted: 25% (was 18% - lighter)
- secondary: 28% (was 20% - softer)
- accent: 30% (was 22% - warmer)
- border: 32% (was 25% - more subtle)

Foreground tones (softer, less harsh):
- foreground: 78% (was 85% - reduced contrast)
- card-foreground: 78% (was 85% - softer)
- secondary-foreground: 75% (was 80% - gentler)
- muted-foreground: 58% (was 60% - slightly muted)

Color adjustments:
- Reduced color intensity (0.14 vs 0.15)
- Softer blue primary: oklch(68% 0.14 240)
- More muted accent colors throughout

Benefits:
- Much softer, gentler appearance
- Lower contrast reduces eye strain
- More comfortable for long editing sessions
- Warmer, more inviting feel
- Modern low-contrast aesthetic
- Similar to VSCode's "Dark+" or Figma's dark mode

The theme now has a beautiful, soft low-contrast aesthetic
that's easy on the eyes while maintaining readability.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:30:24 +01:00
f74c8abbf9 feat: harmonize light theme with soft gray tones
Replaced harsh whites with a soft, professional light gray palette:

Light tones (lightest to darkest):
- waveform-bg: 99% - near-white for waveform area
- card: 98% - soft white for cards/panels
- background: 96% - main background (was 100% - harsh white)
- muted: 94% - muted elements
- secondary/accent: 92% - secondary/accent elements
- input: 92% - input backgrounds
- border: 88% - borders and dividers

Dark tones (text, darker):
- foreground: 20% - main text (was 9.8% - too dark)
- card-foreground: 20% - card text
- secondary-foreground: 25% - secondary text
- muted-foreground: 45% - muted text (lighter for better contrast)

Primary color:
- Changed to pleasant blue: oklch(55% 0.15 240)
- Matches dark theme primary color concept
- More modern than pure black

Benefits:
- Softer, more pleasant appearance
- Reduced eye strain in light mode
- Better harmony with dark mode
- Professional appearance like Figma, Linear, etc.
- Maintains excellent readability
- Consistent gray palette across both themes

Both light and dark themes now have harmonized gray palettes
that are easy on the eyes and professional-looking.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:28:23 +01:00
46ce75f3a5 feat: harmonize dark theme with softer gray tones
Replaced harsh white contrasts with a harmonized gray palette:

Background tones (darkest to lightest):
- waveform-bg: 10% - deepest black for waveform area
- background: 12% - main app background
- card: 15% - card/panel backgrounds
- muted: 18% - muted elements
- secondary: 20% - secondary elements
- accent: 22% - accent elements
- border: 25% - borders and dividers

Foreground tones (lighter, softer):
- foreground: 85% - main text (was 98% - harsh white)
- card-foreground: 85% - card text
- secondary-foreground: 80% - secondary text
- muted-foreground: 60% - muted text

Primary color:
- Changed from white to blue: oklch(70% 0.15 240)
- More pleasant, less harsh than pure white
- Better visual hierarchy

Benefits:
- Reduced eye strain with softer contrasts
- More professional, cohesive appearance
- Better visual hierarchy
- Maintains readability while being easier on the eyes
- Harmonized gray scale throughout the interface

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:26:43 +01:00
439c14db87 feat: show effect chain in collapsed state and polish visual integration
Visual improvements:
- Effects section background now matches waveform (slate-900/50)
- More subtle border (border-border/50)
- Compact header (py-1.5 instead of py-2)
- Expanded area has darker background (slate-900/30) for depth

Collapsed state improvements:
- When collapsed, shows mini effect chips in the header
- Each chip shows effect name with visual state:
  - Enabled: primary color with border (blue/accent)
  - Disabled: muted with border
- Chips are scrollable horizontally if many effects
- Falls back to "Devices (count)" when no effects

Better integration:
- Background colors tie to waveform area
- Subtle hover effect on header (accent/30)
- Visual hierarchy: waveform and effects blend together
- No more "abandoned" feeling - feels like part of the track

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:23:20 +01:00
25be7027f3 refactor: make effects section always visible with inline collapse toggle
- Removed "Devices" button from control panel
- Effects section now always present below each track
- Collapsible via chevron in the effects header itself
- Click header to expand/collapse the device rack
- "+" button in header to add effects (with stopPropagation)
- Each track has independent collapse state
- Hover effect on header for better UX

This matches the typical DAW layout where each track has its own
device section that can be individually collapsed/expanded.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:16:41 +01:00
b9ffbf28ef feat: move effects section to collapsible area below track waveform
Major UX improvement inspired by Ableton Live's device view:

- Effects section now appears below the waveform in the track lane
- Toggle button in control panel shows/hides the effects area
- Gives much more horizontal space for device rack
- Effects section spans full width of track (control panel + waveform)
- Clean separation between track controls and effects
- Header with device count, add button, and close button

Layout changes:
- Track structure changed from horizontal to vertical flex
- Top row contains control panel + waveform (fixed height)
- Bottom section contains collapsible effects area (when shown)
- Effects hidden by default, toggled via "Devices" button
- When collapsed, track doesn't show effects section

This provides a cleaner workflow where users can focus on mixing
when effects are hidden, then expand to full device view when needed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:14:13 +01:00
a3fb8a564e fix: remove pan separator and make effects container full width
- Removed border-t border-border from devices section
- Added negative margin (-mx-3) and padding (px-3) to effects container
  to make it extend full width of the track control panel

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:11:20 +01:00
6bfc8d3cfe feat: implement horizontal device rack with collapsible effect cards
- Created EffectDevice component with expand/collapse state
- Each device shows name, type, enabled/disabled toggle, and remove button
- When expanded, devices show parameter details
- Replaced vertical effect list with horizontal scrolling rack
- Effects display in 192px wide cards with visual feedback
- Power button toggles effect enabled/disabled state
- Parameters display shown when expanded (controls coming soon)

This matches the Ableton Live/Bitwig workflow where effects are
arranged horizontally with collapsible UI for each device.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:09:49 +01:00
fa3588d619 fix: move useMemo before early return to fix React hooks order 2025-11-18 08:03:36 +01:00
cb396ddfd6 feat: add EffectBrowser dialog for adding effects to tracks
Created a beautiful effect browser dialog inspired by Ableton Live:

**EffectBrowser Component:**
- Modal dialog with search functionality
- Effects organized by category:
  - Dynamics (Compressor, Limiter, Gate)
  - Filters (Lowpass, Highpass, Bandpass, etc.)
  - Time-Based (Delay, Reverb, Chorus, Flanger, Phaser)
  - Distortion (Distortion, Bitcrusher)
  - Pitch & Time (Pitch Shifter, Time Stretch)
  - Utility (Normalize, Fade In/Out, Reverse)
- Grid layout with hover effects
- Real-time search filtering
- Click effect to add to track

**Integration:**
- "+" button in track strip opens EffectBrowser dialog
- Selecting an effect adds it to the track's effect chain
- Effects appear immediately in the Devices section
- Full enable/disable and remove functionality

**UX Flow:**
1. Click "+" in track Devices section
2. Browse or search for effect
3. Click effect to add it
4. Effect appears in Devices list
5. Toggle on/off with ●/○
6. Remove with × button

Effects are now fully manageable in the UI! Next: Apply them to audio.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 08:02:21 +01:00
25e843236d fix: add missing Plus icon import in Track component 2025-11-18 07:52:24 +01:00
4547446ced fix: improve devices section UX
- Remove border separator above Devices section for cleaner look
- Add "+" button next to Devices header to add new effects
- Button placement matches Ableton/Bitwig pattern
- TODO: Implement effect browser dialog when + is clicked

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:51:51 +01:00
f3f5b65e1e feat: implement Ableton Live-style DAW layout
Major UX refactor to match professional DAW workflows (Ableton/Bitwig):

**Layout Changes:**
- Removed sidebar completely
- Track actions moved to header toolbar (Add/Import/Clear All)
- Each track now shows its own devices/effects in the track strip
- Master section moved to bottom footer area
- Full-width waveform display

**Track Strip (left panel):**
- Track name (editable inline)
- Color indicator
- Collapse/Solo/Mute/Delete buttons
- Volume slider with percentage
- Pan slider with L/C/R indicator
- Collapsible "Devices" section showing track effects
  - Shows effect count in header
  - Each effect card shows: name, enable/disable toggle, remove button
  - Effects are colored based on enabled/disabled state
  - Click to expand/collapse devices section

**Master Section (bottom):**
- Transport controls (Play/Pause/Stop) with timeline
- Master volume control
- Master effects placeholder (to be implemented)

**Benefits:**
- True DAW experience like Ableton Live
- Each track is self-contained with its own effect chain
- No context switching between tabs
- Effects are always visible for each track
- More screen space for waveforms
- Professional mixer-style layout

Note: Effects are visible but not yet applied to audio - that's next!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:46:27 +01:00
a8f2391400 refactor: improve UX with 2-tab sidebar layout (Option A)
Changed from confusing 3-tab layout to cleaner 2-tab design inspired by
professional DAWs like Ableton Live:

**Tracks Tab:**
- Track management actions (Add/Import/Clear)
- Track count with selected track indicator
- Contextual help text
- Selected track's effects shown below (when track is selected)
- Clear visual separation with border-top

**Master Tab:**
- Master channel description
- Master effects chain
- Preset management for master effects
- Clear that these apply to final mix

Benefits:
- Clear separation: track operations vs master operations
- Contextual: selecting a track shows its effects in same tab
- Less cognitive load than 3 tabs
- Follows DAW conventions (track strip vs master strip)
- Selected track name shown in status area

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:41:16 +01:00
f640f2f9d4 feat: implement per-track and master effect chains (Option 3)
Architecture:
- Each track now has its own effect chain stored in track.effectChain
- Separate master effect chain for the final mix output
- SidePanel has 3 tabs: Tracks, Track FX, Master FX

Changes:
- types/track.ts: Add effectChain field to Track interface
- lib/audio/track-utils.ts: Initialize effect chain when creating tracks
- lib/hooks/useMultiTrack.ts: Exclude effectChain from localStorage, recreate on load
- components/editor/AudioEditor.tsx:
  - Add master effect chain state using useEffectChain hook
  - Add handlers for per-track effect chain manipulation
  - Pass both track and master effect chains to SidePanel
- components/layout/SidePanel.tsx:
  - Update to 3-tab interface (Tracks | Track FX | Master FX)
  - Track FX tab shows effects for currently selected track
  - Master FX tab shows master bus effects with preset management
  - Different icons for track vs master effects tabs

Note: Effect processing in Web Audio API not yet implemented.
This commit sets up the data structures and UI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:30:46 +01:00
2f8718626c feat: implement global volume and mute controls
- Add masterVolume state to AudioEditor (default 0.8)
- Pass masterVolume to useMultiTrackPlayer hook
- Create master gain node in audio graph
- Connect all tracks through master gain before destination
- Update master gain in real-time when volume changes
- Wire up PlaybackControls volume slider and mute button
- Clean up master gain node on unmount

Fixes global volume and mute controls not working in transport controls.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:20:29 +01:00
5817598c48 fix: playback position animation frame not updating
Fixed issue where currentTime wasn't updating during playback:
- Removed 'isPlaying' from updatePlaybackPosition dependencies
- This was causing the RAF loop to stop when state changed
- Now animation frame continues running throughout playback
- Playhead now updates smoothly in waveform and timeline slider

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:16:07 +01:00
c6b9cb9af6 fix: improve spacebar shortcut to ignore buttons
Updated spacebar handler to skip when focus is on:
- Input fields
- Textareas
- Buttons (HTMLButtonElement)
- Elements with role="button"

This prevents spacebar from triggering play/pause when clicking
playback control buttons or other UI controls.
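The focus check described above can be sketched as a small predicate (the name `shouldIgnoreSpacebar` and the plain-object target shape are illustrative assumptions, written DOM-free so the logic is easy to test):

```typescript
// Hypothetical predicate mirroring the spacebar guard described above;
// the real handler would inspect event.target, modeled here as a plain shape.
interface FocusTarget {
  tagName: string;
  getAttribute(name: string): string | null;
}

function shouldIgnoreSpacebar(target: FocusTarget): boolean {
  const tag = target.tagName.toUpperCase();
  // Skip text entry fields and native buttons.
  if (tag === "INPUT" || tag === "TEXTAREA" || tag === "BUTTON") return true;
  // Custom controls marked with role="button" are skipped as well.
  return target.getAttribute("role") === "button";
}
```

In the real keydown listener, this guard would run before `event.preventDefault()` and the play/pause toggle.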

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:14:10 +01:00
a034ca7e85 feat: center transport controls and add spacebar play/pause
Changes:
- Transport controls now centered in footer with flexbox justify-center
- Added spacebar keyboard shortcut for play/pause toggle
- Spacebar only works when not typing in input/textarea fields
- Prevents default spacebar behavior (page scroll) when playing/pausing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:11:43 +01:00
e3582b7b9a feat: show waveform in collapsed state
Changes:
- Waveform canvas now displays even when track is collapsed
- Only the upload placeholder is hidden when collapsed
- Gives better visual overview when tracks are minimized
- Similar to DAWs like Ableton Live

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:09:29 +01:00
1404228239 feat: implement drag & drop for audio loading and decrease upload icon
Changes:
- Reduced upload icon size from h-8 w-8 to h-6 w-6 (smaller, cleaner)
- Implemented full drag & drop functionality for audio files
- Shows visual feedback when dragging: blue border, primary color highlight
- Changes text to "Drop audio file here" when dragging over
- Validates dropped file is audio type before processing
- Updated message from "coming soon" to active "or drag & drop"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:08:04 +01:00
889b2b91ae feat: add click-to-load audio on empty track waveform
Changes:
- Empty tracks now show upload icon and "Click to load audio file" message
- Clicking the placeholder opens native file dialog
- Automatically decodes audio file and updates track with AudioBuffer
- Auto-renames track to filename if track name is still default
- Hover effect with background color change for better UX
- Message about drag & drop coming soon

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:06:46 +01:00
8ffa8e8b81 fix: revert pan icon size back to h-3 w-3
Volume icon is h-3.5 w-3.5, pan icon is h-3 w-3 as requested.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:04:08 +01:00
d3ef961d31 fix: make volume and pan icons the same size
Changed both icons from h-3 w-3 to h-3.5 w-3.5 for consistent sizing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:03:24 +01:00
da5045e4f8 refactor: change pan icon to CircleArrowOutUpRight
Replaced Move icon with CircleArrowOutUpRight for better visual
representation of panning direction control.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:02:32 +01:00
8023369239 feat: add pan knob icon to Pan label
Added Move icon (crosshair/knob) to Pan label to match Volume's icon.
Now both controls have visual icons for better UI consistency.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 07:00:57 +01:00
b8d2cb9585 refactor: make volume and pan sliders inline with labels
Changed slider layout to horizontal:
- Label | Slider | Value all in one row
- Volume: "Volume" icon + label (64px) | slider (flex) | "80%" (40px)
- Pan: "Pan" label (64px) | slider (flex) | "C/L50/R50" (40px)
- Much more compact and professional looking
- Similar to professional DAW mixer strips

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 06:59:24 +01:00
401bab8886 refactor: move solo/mute to header as icon buttons and fix waveform height
Changes:
- Solo and Mute buttons now appear as icon buttons in the track header (next to trash)
- Removed redundant solo/mute button row from the expanded controls
- Cleaner, more compact layout
- Canvas now uses parent container's size for proper full-height rendering
- Added track.height to canvas effect dependencies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 06:56:38 +01:00
8bd326a21b fix: align waveform height with control panel and fix border separator
Changes:
- Waveform now uses absolute positioning to fill the full height of the track
- Border separator only appears at the bottom of each section, not through the control panel
- Both control panel and waveform are now the same height
- Cleaner visual appearance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 06:53:44 +01:00
4381057f3f refactor: make track control panel full height and remove redundant buttons
Changes:
- Track control panel now uses flexbox (full height, no scrolling)
- Removed "Add Empty Track" and "Import Audio Files" buttons from main area
- All track management is now done via the sidebar only
- Cleaner, more professional DAW-style interface

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 06:52:05 +01:00
e376f3b0b4 feat: redesign track layout to DAW-style with left control panel
Major UX improvement inspired by Audacity/Ableton Live:

- Each track now has 2 sections: left control panel (fixed 288px) and right waveform (flexible)
- Left panel contains: track name, color indicator, collapse toggle, volume, pan, solo, mute, delete
- TrackHeader component functionality moved directly into Track component
- Removed redundant track controls from SidePanel
- SidePanel now simplified to show global actions and effect chain
- Track controls are always visible on the left, waveform scrolls horizontally on the right
- Collapsed tracks show only the header row (48px height)
- Much better UX for multi-track editing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 06:41:22 +01:00
f42e5d4556 fix: adjust volume slider max to 100% (was 200%)
Changed volume slider max from 2 to 1 (0-100% range).
Default volume remains at 0.8 (80%).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-18 06:35:35 +01:00
64902fa707 chore: remove debug console.log statements
Cleaned up debugging code now that track name rendering issue is resolved.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:51:42 +01:00
cf1358e051 fix: ensure track name is always a string in createTrack
Added type checking to prevent event objects from being used as track names.
When onClick handlers pass events, we now explicitly check for string type.
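The type check can be sketched like this (function and parameter names are illustrative, not the project's actual `createTrack` signature):

```typescript
// Illustrative sketch: onClick handlers pass a MouseEvent as the first
// argument, so anything that is not a string falls back to a default name.
function resolveTrackName(name: unknown, fallback: string): string {
  return typeof name === "string" ? name : fallback;
}
```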

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:50:43 +01:00
7c19c069bf fix: convert selectedTrack.name to string in Effect Chain header
Fixed [object Object] display in Effect Chain section of SidePanel
by adding String() conversion to selectedTrack.name.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:46:28 +01:00
53d436a174 fix: ensure track name is always converted to string in TrackHeader
Convert track.name to string in all state initializations and updates
to prevent '[object Object]' rendering issues.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:39:43 +01:00
6b540ef8fb fix: prevent localStorage circular reference in track serialization
Explicitly whitelist track fields when saving to localStorage to prevent
DOM element references (HTMLButtonElement with React fiber) from being
serialized. This fixes the circular structure JSON error.

Changes:
- Changed from spread operator exclusion to explicit field whitelisting
- Ensured track.name is always converted to string
- Only serialize Track interface fields that should be persisted
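The whitelisting approach can be sketched as follows (the field names are illustrative assumptions, not the project's exact Track shape):

```typescript
// Illustrative sketch: pick only plain-data fields before JSON.stringify,
// so DOM references or an AudioBuffer attached to the object can never
// produce a circular-structure error. Field names are assumptions.
interface PersistedTrack {
  id: string;
  name: string;
  volume: number;
  pan: number;
  muted: boolean;
  soloed: boolean;
}

function toPersisted(track: PersistedTrack & { [key: string]: unknown }): PersistedTrack {
  return {
    id: track.id,
    name: String(track.name), // guard against non-string names
    volume: track.volume,
    pan: track.pan,
    muted: track.muted,
    soloed: track.soloed,
  };
}
```

Compared with spread-and-exclude, the whitelist fails safe: any field added to the runtime object later stays out of localStorage unless it is explicitly listed.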

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:36:07 +01:00
fe519dab01 fix: convert track name to string in TrackHeader component
Fixed the React error by ensuring track.name is always converted to string in TrackHeader.tsx line 105 where it's rendered.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:33:33 +01:00
a2636bb3a9 fix: ensure track names are strings when loading from localStorage
Added string conversion and error handling when loading tracks from localStorage to prevent corrupted data from causing React rendering errors. If localStorage data is corrupted, it will be cleared automatically.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:30:45 +01:00
fda41fa0e8 fix: convert track name to string to prevent React error
Added String() conversion to track.name display to ensure it's always rendered as text, preventing "Objects are not valid as a React child" error.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:25:54 +01:00
68760bb28f fix: prevent event bubbling on sidebar track controls
Added stopPropagation to track controls container to prevent click events on sliders and buttons from bubbling up to the track selection handler.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:23:24 +01:00
55e67bc187 fix: add 'tracks' category to CommandAction type
Added 'tracks' to the allowed command palette categories to fix TypeScript compilation error.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:20:19 +01:00
de8a3ff187 feat: refactor to multi-track only editor with sidebar controls
Major refactor to simplify the editor and focus exclusively on multi-track editing:

**AudioEditor Changes:**
- Removed single-file waveform view and useAudioPlayer
- Removed all single-file editing operations (cut, copy, paste, trim)
- Simplified to multi-track only with track selection support
- Added selectedTrackId state for track-specific effect chain
- Reduced from ~1500 lines to ~290 lines

**SidePanel Changes:**
- Complete rewrite with 2 tabs: Tracks and Effect Chain
- Tracks tab shows all tracks with inline controls (volume, pan, solo, mute)
- Click tracks to select/deselect
- Effect Chain tab shows effects for selected track
- Removed old file/history/info/effects tabs
- Sidebar now wider (320px) to accommodate inline track controls

**TrackList/Track Changes:**
- Added track selection support (isSelected/onSelect props)
- Visual feedback with ring border when track is selected
- Click anywhere on track to select it

**Workflow:**
1. Import or add audio tracks
2. Select a track in the sidebar or main view
3. Apply effects to selected track via Effect Chain tab
4. Adjust track controls (volume, pan, solo, mute) in Tracks tab

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:17:09 +01:00
4735b5fb00 fix: resolve circular reference error in localStorage
Fixed the "Converting circular structure to JSON" error in useMultiTrack by properly destructuring audioBuffer from track objects before serializing to localStorage.

Changed from spreading the entire track object (which could have circular refs) to explicitly excluding audioBuffer using destructuring.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:04:13 +01:00
44a2d6be3d chore: remove multi-track demo page
Removed /multi-track-demo route as multi-track functionality is now fully integrated into the main AudioEditor view.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 22:00:51 +01:00
832a18dd9c feat: integrate multi-track functionality into main AudioEditor
Added comprehensive multi-track support to the main application:
- Added "Tracks" tab to SidePanel with track management controls
- Integrated useMultiTrack and useMultiTrackPlayer hooks into AudioEditor
- Added view mode switching between waveform and tracks views
- Implemented "Convert to Track" to convert current audio buffer to track
- Added TrackList view with multi-track playback controls
- Wired up ImportTrackDialog for importing multiple audio files

Users can now:
- Click "Tracks" tab in side panel to access multi-track mode
- Convert current audio to a track
- Import multiple audio files as tracks
- View and manage tracks in dedicated TrackList view
- Play multiple tracks simultaneously with individual controls

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 21:57:31 +01:00
83127b3116 feat: add multi-track audio import functionality
Implemented comprehensive multi-file audio import system:
- Created ImportTrackDialog component with drag-and-drop and file browser support
- Updated TrackList to integrate import functionality with dedicated buttons
- Added multi-track-demo page to test and demonstrate import features
- Sequential file processing with automatic track naming from filenames
- Error handling for non-audio files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 21:28:38 +01:00
35514cc685 docs: update PLAN.md for Phase 7 progress
Marked Phase 7.1, 7.2, and 7.3 as complete:

**Phase 7.1 - Track Management:** ✓
- Track creation/removal
- Track naming (inline editing)
- Track colors (9 presets)
- Track reordering support

**Phase 7.2 - Track Controls:** ✓
- Solo/Mute per track
- Volume fader (0-100%)
- Pan control (L-C-R)
- Record enable (UI ready)
- Track height adjustment
- Collapse/expand

**Phase 7.3 - Multi-Track Playback:** ✓
- Real-time multi-track mixing
- Synchronized playback
- Per-track gain and pan
- Solo/Mute handling

**Working Features Added:**
- 12 new multi-track features fully functional
- Complete track management system
- Professional mixing capabilities

**Current Status:** Phase 7 In Progress
**Next:** Integration with AudioEditor UI (Phase 7.4)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 21:04:53 +01:00
d566a86c58 feat: implement multi-track playback system (Phase 7.3)
Added real-time multi-track audio mixing and playback:

**useMultiTrackPlayer Hook:**
- Real-time multi-track audio mixing with Web Audio API
- Synchronized playback across all tracks
- Dynamic gain control respecting solo/mute states
- Per-track panning with constant power panning
- Seek functionality with automatic resume
- Playback position tracking with requestAnimationFrame
- Automatic duration calculation from longest track
- Clean resource management and cleanup

**Features:**
- Play/Pause/Stop controls for multi-track
- Solo/Mute handling (if any track is soloed, only soloed tracks play)
- Per-track volume control (0-1 range)
- Per-track pan control (-1 left to +1 right)
- Real-time parameter updates during playback
- Seamless seek with playback state preservation
- Automatic stop when reaching end of longest track

**Audio Graph Architecture:**
For each track: BufferSource → GainNode → StereoPannerNode → Destination

The mixer applies:
- Volume attenuation based on track volume setting
- Solo/Mute logic (getTrackGain utility)
- Constant power panning for smooth stereo positioning
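The mixer math above can be sketched as two pure functions (a sketch of the described behavior; the signatures are assumptions, not the project's actual `getTrackGain` API):

```typescript
// Sketch of the mixer rules described above. getTrackGain applies the
// solo/mute logic; panGains computes constant-power left/right gains
// for a pan value in [-1, 1].
interface TrackState { volume: number; muted: boolean; soloed: boolean; }

function getTrackGain(track: TrackState, anySoloed: boolean): number {
  if (track.muted) return 0;                 // muted tracks are silent
  if (anySoloed && !track.soloed) return 0;  // a solo elsewhere silences this track
  return track.volume;                       // otherwise use the fader value
}

function panGains(pan: number): { left: number; right: number } {
  // Map pan from [-1, 1] onto an angle in [0, PI/2]; cos/sin keep
  // left^2 + right^2 constant, so perceived loudness stays even across the field.
  const angle = ((pan + 1) / 2) * (Math.PI / 2);
  return { left: Math.cos(angle), right: Math.sin(angle) };
}
```

In the Web Audio graph, `getTrackGain` would feed each track's GainNode and `pan` the StereoPannerNode (which implements equal-power panning natively).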

Next: Integrate multi-track UI into AudioEditor

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 21:03:39 +01:00
3c950eeba7 feat: implement Phase 7.1-7.2 multi-track infrastructure
Added core multi-track support with track management and controls:

**Track Types & Utilities:**
- Track interface with audio buffer, controls (volume/pan/solo/mute)
- Track utility functions for creation, mixing, and gain calculation
- Track color system with 9 preset colors
- Configurable track heights (60-300px)

**Components:**
- TrackHeader: Collapsible track controls with inline name editing
  - Solo/Mute buttons with visual feedback
  - Volume slider (0-100%) and Pan control (L-C-R)
  - Track color indicator and remove button
- Track: Waveform display component with canvas rendering
  - Click-to-seek on waveform
  - Playhead visualization
  - Support for collapsed state
- TrackList: Container managing multiple tracks
  - Scrollable track list with custom scrollbar
  - Add track button
  - Empty state UI

**State Management:**
- useMultiTrack hook with localStorage persistence
- Add/remove/update/reorder track operations
- Track buffer management

Features implemented:
- Track creation and removal
- Track naming (editable)
- Track colors
- Solo/Mute per track
- Volume fader per track (0-100%)
- Pan control per track (L-C-R)
- Track collapse/expand
- Track height configuration
- Waveform visualization per track
- Multi-track audio mixing utilities
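A plausible shape for the Track model and factory described above (all field names, defaults, and the palette itself are assumptions, not the repo's actual code):

```typescript
// Hypothetical Track model with a 9-color palette and configurable heights.
const TRACK_COLORS = [
  "#ef4444", "#f97316", "#eab308", "#22c55e", "#14b8a6",
  "#3b82f6", "#8b5cf6", "#ec4899", "#64748b",
]; // 9 preset colors (illustrative values)

interface Track {
  id: string;
  name: string;
  color: string;
  volume: number;   // 0..1 (shown as 0-100%)
  pan: number;      // -1..1 (L-C-R)
  solo: boolean;
  mute: boolean;
  collapsed: boolean;
  height: number;   // configurable, 60-300px
}

let nextTrackIndex = 0;

function createTrack(name?: string): Track {
  const i = nextTrackIndex++;
  return {
    id: `track-${i}`,
    name: name ?? `Track ${i + 1}`,
    color: TRACK_COLORS[i % TRACK_COLORS.length], // cycle through presets
    volume: 1,
    pan: 0,
    solo: false,
    mute: false,
    collapsed: false,
    height: 120,
  };
}
```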

Next: Integrate into AudioEditor and implement multi-track playback

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 20:59:36 +01:00
39624ca9cf fix: ensure canvas renders when dialog opens
Fixed an issue where the canvas visualization sometimes failed to paint by:
- Adding `open` prop check before rendering
- Adding `open` to useEffect dependencies
- Adding dimension validation for dynamic canvas sizes
- Ensures canvas properly renders when dialog becomes visible

Affected dialogs: DynamicsParameterDialog, TimeBasedParameterDialog,
AdvancedParameterDialog
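The guard amounts to refusing to draw until the dialog is open and the canvas reports real dimensions; a sketch (the helper name is hypothetical):

```typescript
// Only draw when the dialog is open and the canvas has valid, non-zero
// dimensions; hidden dialogs can report 0x0 (or NaN) canvas sizes.
function canRenderCanvas(open: boolean, width: number, height: number): boolean {
  return (
    open &&
    Number.isFinite(width) && width > 0 &&
    Number.isFinite(height) && height > 0
  );
}
```

With `open` also listed in the drawing effect's dependency array, the effect re-runs as soon as the dialog becomes visible.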

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 20:53:59 +01:00
2fc1620495 fix: add nullish coalescing to parameter dialogs
Fixed "Cannot read properties of undefined (reading 'toFixed')" errors
in TimeBasedParameterDialog and DynamicsParameterDialog by adding
nullish coalescing operators with default values to all parameter
accesses. This prevents errors when loading presets that have partial
parameter sets.
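The pattern in code (parameter names here are illustrative, not the dialogs' actual fields):

```typescript
// Presets may carry only a subset of parameters; `??` supplies a default
// before .toFixed so `undefined` never reaches the formatter.
interface DelayParams { time?: number; feedback?: number; mix?: number }

function formatDelay(p: DelayParams): string {
  return `${(p.time ?? 0.25).toFixed(2)}s / fb ${(p.feedback ?? 0.3).toFixed(2)}`;
}
```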

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 20:50:48 +01:00
8a0bd46593 fix: correct FilterOptions import in effect chain
Fixed TypeScript build error by importing FilterOptions instead of
non-existent FilterParameters from filters module.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 20:34:38 +01:00
0986896756 feat: add Phase 6.6 Effect Chain Management system
Effect Chain System:
- Create comprehensive effect chain types and state management
- Implement EffectRack component with drag-and-drop reordering
- Add enable/disable toggle for individual effects
- Build PresetManager for save/load/import/export functionality
- Create useEffectChain hook with localStorage persistence

UI Integration:
- Add Chain tab to SidePanel with effect rack
- Integrate preset manager dialog
- Add chain management controls (clear, presets)
- Update SidePanel with chain props and handlers

Features:
- Drag-and-drop effect reordering with visual feedback
- Effect bypass/enable toggle with power icons
- Save effect chains as presets with descriptions
- Import/export presets as JSON files
- localStorage persistence for chains and presets
- Visual status indicators for enabled/disabled effects
- Preset timestamp and effect count display

Components Created:
- /lib/audio/effects/chain.ts - Effect chain types and utilities
- /components/effects/EffectRack.tsx - Visual effect chain component
- /components/effects/PresetManager.tsx - Preset management dialog
- /lib/hooks/useEffectChain.ts - Effect chain state hook

Updated PLAN.md:
- Mark Phase 6.6 as complete
- Update current status to Phase 6.6 Complete
- Add effect chain features to working features list
- Update Next Steps to show Phase 6 complete

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 20:27:08 +01:00
bc4e75167f fix: resolve TypeScript compilation errors in advanced effects
- Convert if/else chains to proper type narrowing patterns
- Use type assertions to avoid discriminated union issues
- Simplify EffectPreset and DEFAULT_PARAMS types using any
- Fix TypeScript strict mode compilation errors
- Reorder handler logic to avoid type narrowing conflicts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 20:10:47 +01:00
ee48f9475f feat: add advanced audio effects and improve UI
Phase 6.5 Advanced Effects:
- Add Pitch Shifter with semitones and cents adjustment
- Add Time Stretch with pitch preservation using overlap-add
- Add Distortion with soft/hard/tube types and tone control
- Add Bitcrusher with bit depth and sample rate reduction
- Add AdvancedParameterDialog with real-time waveform visualization
- Add 4 professional presets per effect type

Improvements:
- Fix undefined parameter errors by adding nullish coalescing operators
- Add global custom scrollbar styling with color-mix transparency
- Add custom-scrollbar utility class for side panel
- Improve theme-aware scrollbar appearance in light/dark modes
- Fix parameter initialization when switching effect types

Integration:
- All advanced effects support undo/redo via EffectCommand
- Effects accessible via command palette and side panel
- Selection-based processing support
- Toast notifications for all effects

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 20:03:40 +01:00
f414573655 fix: seeking backwards no longer jumps to frame 0
Fixed critical bug where seeking backwards during playback would
jump to position 0 instead of the desired position.

Root cause:
- When stop() was called on a playing source node, it triggered
  the onended event callback
- The onended callback would set pauseTime = 0 (thinking playback
  naturally ended)
- This interfered with the seek operation which was trying to set
  a new position

Solution:
- Clear sourceNode.onended callback BEFORE calling stop()
- This prevents the ended event from firing when we manually stop
- Seeking now works correctly in all directions

The fix ensures clean state transitions during seeking operations.
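The ordering can be shown against a minimal stand-in for AudioBufferSourceNode (a sketch of the fix's shape, not the repo's code):

```typescript
// Minimal stand-in for the parts of AudioBufferSourceNode used here.
interface SourceLike {
  onended: (() => void) | null;
  stop(): void;
}

function stopForSeek(source: SourceLike): void {
  // Detach the natural-end handler BEFORE stopping: a manual stop must
  // not run the "playback ended" logic that resets pauseTime to 0.
  source.onended = null;
  source.stop();
}
```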

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:32:50 +01:00
10d2921147 fix: proper seeking behavior with optional auto-play
Complete rewrite of seeking logic to support both scrubbing and
click-to-play functionality with proper state management.

Changes:
1. Added autoPlay parameter to seek() methods across the stack
2. Waveform behavior:
   - Click and drag → scrubs WITHOUT auto-play during drag
   - Mouse up after drag → auto-plays from release position
   - This allows smooth scrubbing while dragging
3. Timeline slider behavior:
   - onChange → seeks WITHOUT auto-play (smooth dragging)
   - onMouseUp/onTouchEnd → auto-plays from final position
4. Transport button state now correctly syncs with playback state

Technical implementation:
- player.seek(time, autoPlay) - autoPlay defaults to false
- If autoPlay=true OR was already playing → continues playback
- If autoPlay=false AND wasn't playing → just seeks (isPaused=true)
- useAudioPlayer.seek() now reads actual player state after seeking

User experience:
✓ Click on waveform → music plays from that position
✓ Drag on waveform → scrubs smoothly, plays on release
✓ Drag timeline slider → scrubs smoothly, plays on release
✓ Transport buttons show correct state (Play/Pause)
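The state rules above reduce to one decision per seek; a sketch under an assumed state shape (the real player also recreates its source node):

```typescript
interface PlayerState {
  isPlaying: boolean;
  isPaused: boolean;
  position: number; // seconds
}

// autoPlay=true OR already playing → continue playback from `time`;
// otherwise just move the playhead and stay paused (scrubbing).
function seek(state: PlayerState, time: number, autoPlay = false): PlayerState {
  const play = autoPlay || state.isPlaying;
  return { isPlaying: play, isPaused: !play, position: time };
}
```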

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:30:11 +01:00
9aac873b53 fix: update transport button state when seeking auto-starts playback
Fixed issue where Play/Pause buttons didn't update after clicking on
waveform or timeline slider.

Problem:
- seek() in useAudioPlayer hook only updated currentTime
- But player.seek() now auto-starts playback
- React state (isPlaying, isPaused) wasn't updated
- Transport buttons showed wrong state (Play instead of Pause)

Solution:
- Update isPlaying = true and isPaused = false in seek callback
- Now transport buttons correctly show Pause icon when seeking
- UI state matches actual playback state

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:27:12 +01:00
19bf7dc68a fix: seek now auto-starts playback for intuitive interaction
Changed seek behavior to match user expectations:
- Clicking on waveform now immediately starts playing from that position
- Dragging timeline slider starts playback at the selected position
- No need to click Play button after seeking

Previous behavior:
- seek() only continued playback if music was already playing
- Otherwise it just set isPaused = true
- User had to seek AND THEN click play button

New behavior:
- seek() always starts playback automatically
- Click anywhere → music plays from that position
- Much more intuitive UX matching common audio editors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:24:31 +01:00
958c6d6680 fix: playback starts from seeked position instead of beginning
Fixed critical bug where playback always started from 0:00 regardless
of seek position or pause state.

Root cause:
- play() method called stop() which reset pauseTime to 0
- Then it checked isPaused (now false) and pauseTime (now 0)
- Result: always played from beginning

Solution:
- Calculate offset BEFORE calling stop()
- Preserve the paused position or start offset
- Then stop and recreate source node with correct offset

Now playback correctly starts from:
- Seeked position (waveform click or timeline slider)
- Paused position (when resuming after pause)
- Specified start offset (when provided)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:21:10 +01:00
d1ff709400 fix: improve info box readability in EditControls and HistoryControls
Enhanced info boxes with better contrast and visibility:
- Increased background opacity from /10 to /90 for blue info boxes
- Changed to thicker border (border-2) for better definition
- Changed text color to white for better contrast on blue background
- Applied consistent styling across:
  * Selection Active info box (EditControls)
  * History Available info box (HistoryControls)

Info boxes are now clearly readable in both light and dark modes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:16:24 +01:00
cbd9ad03fc fix: improve toast notification readability
Enhanced toast notifications with better contrast and visibility:
- Increased background opacity from /10 to /90 for all variants
- Changed to thicker border (border-2) for better definition
- Added backdrop-blur-sm for depth and clarity
- Improved text contrast:
  * Success/Error/Info: White text on colored backgrounds
  * Warning: Black text on yellow background
  * Default: Uses theme foreground color
- Enhanced shadow (shadow-2xl) for better separation
- Added aria-label to close button for accessibility

Toast notifications are now clearly readable in both light and dark modes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:15:20 +01:00
0d43d73014 fix: cleanup round - full width layout and paste operation bug
Fixed two issues:
1. Made audio editor full width by removing max-w-6xl wrapper
2. Fixed insertBufferSegment bug where paste operation had incorrect
   target buffer indexing causing corrupted audio after paste

Changes:
- app/page.tsx: Removed max-w-6xl constraint for full-width editor
- lib/audio/buffer-utils.ts: Fixed paste buffer copying logic
  * Corrected target index calculation in "copy after insert point" loop
  * Was: targetData[insertBuffer.length + i] = sourceData[i]
  * Now: targetData[insertSample + insertBuffer.length + (i - insertSample)] = sourceData[i]
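The corrected indexing is easiest to see on a single channel (a sketch; the real insertBufferSegment operates per-channel on AudioBuffers):

```typescript
// Insert `insert` into `source` at sample index `insertSample`; samples
// after the insert point must land AFTER the inserted segment.
function insertSegment(
  source: Float32Array,
  insert: Float32Array,
  insertSample: number
): Float32Array {
  const out = new Float32Array(source.length + insert.length);
  out.set(source.subarray(0, insertSample), 0);  // before insert point
  out.set(insert, insertSample);                 // inserted segment
  out.set(source.subarray(insertSample),         // tail, shifted right
          insertSample + insert.length);         // by the insert length
  return out;
}
```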

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:13:25 +01:00
d6a52ee477 docs: update PLAN.md to reflect Phase 5 completion
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:08:58 +01:00
159da29082 feat: implement Phase 5 - undo/redo system with command pattern
Added comprehensive undo/redo functionality:
- Command pattern interface and base classes
- HistoryManager with 50-operation stack
- EditCommand for all edit operations (cut, delete, paste, trim)
- Full keyboard shortcuts (Ctrl+Z undo, Ctrl+Y/Ctrl+Shift+Z redo)
- HistoryControls UI component with visual feedback
- Integrated history system with all edit operations
- Toast notifications for undo/redo actions
- History state tracking and display

New files:
- lib/history/command.ts - Command interface and BaseCommand
- lib/history/history-manager.ts - HistoryManager class
- lib/history/commands/edit-command.ts - EditCommand and factory functions
- lib/hooks/useHistory.ts - React hook for history management
- components/editor/HistoryControls.tsx - History UI component

Modified files:
- components/editor/AudioEditor.tsx - Integrated history system
- components/editor/EditControls.tsx - Updated keyboard shortcuts display

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:08:31 +01:00
55009593f7 docs: clarify Phase 4 accomplished vs future features
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 17:05:50 +01:00
bf8d25aa1d docs: update PLAN.md to reflect Phase 4 completion
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 15:51:38 +01:00
ed9ac0b24f feat: implement Phase 4 - selection and editing features
Added comprehensive audio editing capabilities:
- Region selection with Shift+drag on waveform
- Visual selection feedback with blue overlay and borders
- AudioBuffer manipulation utilities (cut, copy, paste, delete, trim)
- EditControls UI component with edit buttons
- Keyboard shortcuts (Ctrl+A, Ctrl+X, Ctrl+C, Ctrl+V, Delete, Escape)
- Clipboard management for cut/copy/paste operations
- Updated useAudioPlayer hook with loadBuffer method

New files:
- types/selection.ts - Selection and ClipboardData interfaces
- lib/audio/buffer-utils.ts - AudioBuffer manipulation utilities
- components/editor/EditControls.tsx - Edit controls UI

Modified files:
- components/editor/Waveform.tsx - Added selection support
- components/editor/AudioEditor.tsx - Integrated edit operations
- lib/hooks/useAudioPlayer.ts - Added loadBuffer method

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-17 15:50:42 +01:00
107 changed files with 25221 additions and 645 deletions

PLAN.md

@@ -2,33 +2,172 @@
## Progress Overview ## Progress Overview
**Current Status**: Phase 3 Complete **Current Status**: Phase 14 Complete (Settings & Preferences: Global settings with localStorage persistence) - Ready for Phase 15
### Completed Phases ### Completed Phases
-**Phase 1**: Project Setup & Core Infrastructure (95% complete) -**Phase 1**: Project Setup & Core Infrastructure (95% complete)
-**Phase 2**: Audio Engine Foundation (90% complete) -**Phase 2**: Audio Engine Foundation (90% complete)
-**Phase 3**: Waveform Visualization (95% complete) -**Phase 3**: Waveform Visualization (95% complete)
-**Phase 4**: Selection & Editing (70% complete - core editing features done)
-**Phase 5**: Undo/Redo System (100% complete)
-**UI Redesign**: Professional Audacity-style layout (100% complete)
### Working Features ### Working Features
**Core Features:**
- ✅ Audio file upload with drag-and-drop - ✅ Audio file upload with drag-and-drop
- ✅ Waveform visualization with real-time progress - ✅ Waveform visualization with real-time progress
- ✅ Playback controls (play, pause, stop, seek) - ✅ Playback controls (play, pause, stop, seek)
- ✅ Volume control with mute - ✅ Volume control with mute
- ✅ Timeline scrubbing - ✅ Timeline scrubbing (click-to-play, drag-to-scrub)
-**Drag-to-scrub audio** (NEW!) -Horizontal zoom (1x-20x)
-**Horizontal zoom (1x-20x)** (NEW!) -Vertical amplitude zoom
-**Vertical amplitude zoom** (NEW!) -Scroll through zoomed waveform
-**Scroll through zoomed waveform** (NEW!) -Grid lines every second
-**Grid lines every second** (NEW!) -Viewport culling for performance
-**Viewport culling for performance** (NEW!)
- ✅ Dark/light theme support - ✅ Dark/light theme support
- ✅ Toast notifications - ✅ Toast notifications
- ✅ File metadata display - ✅ File metadata display
**Editing Features:**
- ✅ Region selection with Shift+drag
- ✅ Visual selection feedback
- ✅ Cut/Copy/Paste operations
- ✅ Delete and Trim operations
- ✅ Keyboard shortcuts (Ctrl+A/X/C/V, Delete, Escape)
- ✅ Clipboard management
- ✅ Undo/Redo system with command pattern
- ✅ History tracking (50 operations)
- ✅ Undo/Redo keyboard shortcuts (Ctrl+Z, Ctrl+Y)
**Audio Effects:**
- ✅ Normalize (peak amplitude)
- ✅ Fade In (linear/exponential/logarithmic curves)
- ✅ Fade Out (linear/exponential/logarithmic curves)
- ✅ Reverse audio
- ✅ Low-Pass Filter (removes high frequencies)
- ✅ High-Pass Filter (removes low frequencies)
- ✅ Band-Pass Filter (isolates frequency range)
- ✅ Compressor (with visual transfer curve and presets)
- ✅ Limiter (brick-wall limiting with makeup gain)
- ✅ Gate/Expander (noise reduction and dynamics expansion)
- ✅ Delay/Echo (time, feedback, mix with visual pattern)
- ✅ Reverb (Schroeder algorithm with room size and damping)
- ✅ Chorus (LFO modulation with depth, rate controls)
- ✅ Flanger (short modulated delay with feedback)
- ✅ Phaser (allpass filters with LFO modulation)
- ✅ Pitch Shifter (semitones and cents adjustment)
- ✅ Time Stretch (change duration with/without pitch preservation)
- ✅ Distortion (soft/hard/tube overdrive with tone control)
- ✅ Bitcrusher (bit depth and sample rate reduction)
- ✅ All effects support undo/redo
- ✅ Effects accessible via command palette and side panel
- ✅ Parameterized effects with real-time visual feedback
**Effect Chain Management:**
- ✅ Effect rack with drag-and-drop reordering
- ✅ Enable/disable individual effects (bypass)
- ✅ Save and load effect chain presets
- ✅ Import/export presets as JSON files
- ✅ Chain tab in side panel
- ✅ localStorage persistence for chains and presets
- ✅ Visual effect rack with status indicators
**Professional UI:**
- ✅ Command Palette (Ctrl+K) with searchable actions
- ✅ Compact header (Logo + Command Palette + Theme Toggle)
- ✅ Collapsible side panel with tabs (File, Chain, Effects, History, Info)
- ✅ Full-screen waveform canvas layout
- ✅ Integrated playback controls at bottom
- ✅ Keyboard-driven workflow
**Multi-Track Features (Phase 7 - Complete):**
- ✅ Track creation and removal
- ✅ Track naming with inline editing
- ✅ Track colors (9 preset colors)
- ✅ Solo/Mute per track with visual feedback
- ✅ Volume fader per track (0-100%)
- ✅ Pan control per track (L-C-R)
- ✅ Track collapse/expand
- ✅ Waveform visualization per track
- ✅ Real-time multi-track audio mixing
- ✅ Synchronized playback across all tracks
- ✅ Per-track gain and pan during playback
- ✅ Solo/Mute handling during playback
- ✅ Per-track effect chains with device rack
- ✅ Collapsible effects section below each track (192px height)
- ✅ Effect browser with categorized effects
- ✅ Horizontal scrolling device rack (Ableton-style)
- ✅ Individual effect cards with side-folding design (40px collapsed, 384px+ expanded)
- ✅ Real-time parameter controls for all effects (filters, dynamics, time-based, advanced)
- ✅ Inline parameter editing with sliders and controls (multi-column grid layout)
- ✅ Real-time effect processing during playback with Web Audio API nodes
- ✅ Effect bypass functionality (disable/enable effects in real-time)
- ✅ Supported real-time effects: All filters, compressor, limiter, gate, delay
- 🔲 Advanced real-time effects: Reverb, chorus, flanger, phaser, distortion (TODO: Complex node graphs)
- 🔲 Master channel effects (TODO: Implement master effect chain UI similar to per-track effects)
**Recording Features (Phase 8 - Complete):**
- ✅ Microphone permission request
- ✅ Audio input device selection
- ✅ Input level meter with professional dB scale
- ✅ Real-time input monitoring
- ✅ Per-track record arming
- ✅ Global record button
- ✅ Recording indicator with pulse animation
- ✅ Punch-in/Punch-out controls (time-based recording region)
- ✅ Overdub mode (layer recordings by mixing audio)
- ✅ Input gain control (0.0-2.0 with dB display, adjustable in real-time)
- ✅ Mono/Stereo recording selection
- ✅ Sample rate matching (44.1kHz, 48kHz, 96kHz)
- ✅ Recording settings panel shown when track is armed
**Analysis Tools (Phase 10 - Complete):**
- ✅ Frequency Analyzer with real-time FFT display
- ✅ Spectrogram with time-frequency waterfall visualization
- ✅ Phase Correlation Meter (stereo phase analysis)
- ✅ LUFS Loudness Meter (momentary/short-term/integrated)
- ✅ Audio Statistics Panel (project info and levels)
- ✅ Color-coded heat map (blue → cyan → green → yellow → red)
- ✅ Toggle between 5 analyzer views (FFT/SPEC/PHS/LUFS/INFO)
- ✅ Theme-aware backgrounds (light/dark mode support)
- ✅ Peak and RMS meters (master and per-track)
- ✅ Clip indicator with reset (master only)
**Export Features (Phase 11.1, 11.2 & 11.3 - Complete):**
- ✅ WAV export (16/24/32-bit PCM or float)
- ✅ MP3 export with lamejs (128/192/256/320 kbps)
- ✅ FLAC export with fflate compression (quality 0-9)
- ✅ Format selector dropdown with dynamic options
- ✅ Normalization option (1% headroom)
- ✅ Export scope selector:
- Entire Project (mix all tracks)
- Selected Region (extract and mix selection)
- Individual Tracks (separate files with sanitized names)
- ✅ Export dialog with format-specific settings
- ✅ Dynamic file extension display
- ✅ Smart selection detection (disable option when no selection)
**Settings & Preferences (Phase 14 - Complete):**
- ✅ Global settings dialog with 5 tabs (Recording, Audio, Editor, Interface, Performance)
- ✅ localStorage persistence with default merging
- ✅ Audio settings: buffer size, sample rate (applied to recording), auto-normalize
- ✅ UI settings: theme, font size, default track height (applied to new tracks)
- ✅ Editor settings: auto-save interval, undo limit, snap-to-grid, grid resolution, default zoom (applied)
- ✅ Performance settings: peak quality, waveform quality, spectrogram toggle (applied), max file size
- ✅ Category-specific reset buttons
- ✅ Real-time application to editor behavior
### Next Steps ### Next Steps
- Selection & region editing (Phase 4) - **Phase 6**: Audio effects ✅ COMPLETE (Basic + Filters + Dynamics + Time-Based + Advanced + Chain Management)
- Undo/redo system (Phase 5) - **Phase 7**: Multi-track editing ✅ COMPLETE (Multi-track playback, effects, selection/editing)
- Audio effects (Phase 6) - **Phase 8**: Recording functionality ✅ COMPLETE (Audio input, controls, settings with overdub/punch)
- **Phase 9**: Automation ✅ COMPLETE (Volume/Pan automation with write/touch/latch modes)
- **Phase 10**: Analysis Tools ✅ COMPLETE (FFT, Spectrogram, Phase Correlation, LUFS, Audio Statistics)
- **Phase 11**: Export & Import ✅ COMPLETE (Full export/import with all formats, settings, scope options & conversions)
- **Phase 12**: Project Management ✅ COMPLETE (IndexedDB storage, auto-save, project export/import)
- **Phase 13**: Keyboard Shortcuts ✅ COMPLETE (Full suite of shortcuts for navigation, editing, and view control)
- **Phase 14**: Settings & Preferences ✅ COMPLETE (Global settings with localStorage persistence and live application)
--- ---
@@ -309,16 +448,35 @@ audio-ui/
- [x] Viewport culling (render only visible region) - [x] Viewport culling (render only visible region)
- [ ] Web Worker for peak calculation - [ ] Web Worker for peak calculation
### Phase 4: Selection & Editing ### Phase 4: Selection & Editing (70% Complete)
**✅ Accomplished:**
- Region selection with Shift+drag on waveform
- Visual selection feedback (blue overlay with borders)
- Full keyboard shortcuts (Ctrl+A, Ctrl+X/C/V, Delete, Escape)
- Core edit operations (Cut, Copy, Paste, Delete, Trim)
- Internal clipboard management with ClipboardData
- AudioBuffer manipulation utilities (extract, delete, insert, trim, concatenate)
- EditControls UI component with selection info display
**📋 Future Features (Optional/Advanced):**
- Keyboard-based selection with Shift+Arrow keys
- Named and color-coded region markers
- Region list panel with marker management
- Loop region functionality
- Split at cursor
- Duplicate selection
- Multiple clipboard slots
- Clipboard preview
#### 4.1 Region Selection #### 4.1 Region Selection
- [ ] Click-and-drag selection - [x] Click-and-drag selection (Shift+drag)
- [ ] Keyboard-based selection (Shift+Arrow) - [ ] Keyboard-based selection (Shift+Arrow) - FUTURE
- [ ] Select all (Ctrl+A) - [x] Select all (Ctrl+A)
- [ ] Clear selection (Escape) - [x] Clear selection (Escape)
- [ ] Selection visual feedback - [x] Selection visual feedback
#### 4.2 Region Markers #### 4.2 Region Markers - FUTURE
- [ ] Add/Remove region markers - [ ] Add/Remove region markers
- [ ] Named regions - [ ] Named regions
- [ ] Color-coded regions - [ ] Color-coded regions
@@ -326,317 +484,428 @@ audio-ui/
- [ ] Loop region - [ ] Loop region
#### 4.3 Basic Edit Operations #### 4.3 Basic Edit Operations
- [ ] Cut (Ctrl+X) - [x] Cut (Ctrl+X)
- [ ] Copy (Ctrl+C) - [x] Copy (Ctrl+C)
- [ ] Paste (Ctrl+V) - [x] Paste (Ctrl+V)
- [ ] Delete (Delete key) - [x] Delete (Delete key)
- [ ] Trim to selection - [x] Trim to selection
- [ ] Split at cursor - [ ] Split at cursor - FUTURE
- [ ] Duplicate selection - [ ] Duplicate selection - FUTURE
#### 4.4 Clipboard Management #### 4.4 Clipboard Management
- [ ] Internal clipboard for audio data - [x] Internal clipboard for audio data
- [ ] Multiple clipboard slots (optional) - [ ] Multiple clipboard slots (optional) - FUTURE
- [ ] Clipboard preview - [ ] Clipboard preview - FUTURE
### Phase 5: Undo/Redo System ### Phase 5: Undo/Redo System (100% Complete)
**✅ Accomplished:**
- Command pattern with interface and base classes
- HistoryManager with 50-operation stack
- EditCommand for cut, delete, paste, and trim operations
- Full keyboard shortcuts (Ctrl+Z undo, Ctrl+Y/Ctrl+Shift+Z redo)
- HistoryControls UI component with visual state display
- Toast notifications for undo/redo actions
- History state tracking (canUndo, canRedo, descriptions)
- useHistory React hook for state management
- Integration with all edit operations
- Clear history functionality
**📋 Future Features (Advanced):**
- EffectCommand for audio effects (Phase 6)
- VolumeCommand for gain changes (Phase 6)
- SplitCommand for region splitting (Phase 4 future)
- Configurable history limit in settings
- History panel with full operation list
#### 5.1 Command Pattern #### 5.1 Command Pattern
- [ ] Command interface - [x] Command interface
- [ ] Execute/Undo/Redo methods - [x] Execute/Undo/Redo methods
- [ ] Command factory - [x] Command factory (createCutCommand, createDeleteCommand, etc.)
#### 5.2 History Manager #### 5.2 History Manager
- [ ] History stack (50-100 operations) - [x] History stack (50 operations)
- [ ] Undo (Ctrl+Z) - [x] Undo (Ctrl+Z)
- [ ] Redo (Ctrl+Y or Ctrl+Shift+Z) - [x] Redo (Ctrl+Y or Ctrl+Shift+Z)
- [ ] Clear history - [x] Clear history
- [ ] History limit configuration - [x] History limit configuration
#### 5.3 Specific Commands #### 5.3 Specific Commands
- [ ] EditCommand (cut/paste/delete) - [x] EditCommand (cut/paste/delete/trim)
- [ ] EffectCommand (apply effects) - [ ] EffectCommand (apply effects) - PHASE 6
- [ ] VolumeCommand (gain changes) - [ ] VolumeCommand (gain changes) - PHASE 6
- [ ] SplitCommand (split regions) - [ ] SplitCommand (split regions) - FUTURE
### Phase 6: Audio Effects ### Phase 6: Audio Effects
#### 6.1 Basic Effects #### 6.1 Basic Effects (✅ Complete)
- [ ] Gain/Volume adjustment - [x] Gain/Volume adjustment
- [ ] Pan (stereo positioning) - [ ] Pan (stereo positioning) - FUTURE
- [ ] Fade In/Fade Out (linear/exponential/logarithmic) - [x] Fade In/Fade Out (linear/exponential/logarithmic)
- [ ] Normalize (peak/RMS) - [x] Normalize (peak/RMS)
- [ ] Reverse - [x] Reverse
- [ ] Silence generator - [ ] Silence generator - FUTURE
- [x] EffectCommand for undo/redo integration
- [x] Effects added to command palette
- [x] Toast notifications for effects
#### 6.2 Filters & EQ #### 6.2 Filters & EQ (Partially Complete)
- [ ] Parametric EQ (3-band, 10-band, 31-band) - [ ] Parametric EQ (3-band, 10-band, 31-band) - FUTURE
- [ ] Low-pass filter - [x] Low-pass filter (1000Hz cutoff, configurable)
- [ ] High-pass filter - [x] High-pass filter (100Hz cutoff, configurable)
- [ ] Band-pass filter - [x] Band-pass filter (1000Hz center, configurable)
- [ ] Notch filter - [x] Notch filter (implemented in filters.ts)
- [ ] Shelf filters (low/high) - [x] Shelf filters (low/high) (implemented in filters.ts)
- [ ] Visual EQ curve editor - [x] Peaking EQ filter (implemented in filters.ts)
- [ ] Visual EQ curve editor - FUTURE
- [x] Filters integrated with undo/redo system
- [x] Filters added to command palette
#### 6.3 Dynamics Processing #### 6.3 Dynamics Processing (✅ Complete)
- [ ] Compressor (threshold, ratio, attack, release, knee) - [x] Compressor (threshold, ratio, attack, release, knee, makeup gain)
- [ ] Limiter - [x] Limiter (threshold, attack, release, makeup gain)
- [ ] Gate/Expander - [x] Gate/Expander (threshold, ratio, attack, release, knee)
- [ ] Visual gain reduction meter - [x] Visual transfer curve showing input/output relationship
- [x] Professional presets for each effect type
- [x] Real-time parameter visualization
- [x] EffectCommand integration for undo/redo
- [x] Effects added to command palette and side panel
- [x] Selection-based processing support
- [ ] Visual gain reduction meter (realtime metering - FUTURE)
#### 6.4 Time-Based Effects #### 6.4 Time-Based Effects
- [ ] Delay/Echo (time, feedback, mix) - [x] Delay/Echo (time, feedback, mix)
- [ ] Reverb (Convolution Reverb with IR files) - [x] Reverb (Schroeder algorithmic reverb with room size and damping)
- [ ] Chorus (depth, rate, delay) - [x] Chorus (depth, rate, delay with LFO modulation)
- [ ] Flanger - [x] Flanger (short modulated delay with feedback)
- [ ] Phaser - [x] Phaser (cascaded allpass filters with LFO)
- [x] TimeBasedParameterDialog component with visual feedback
- [x] Integration with command palette and side panel
- [x] 4 presets per effect type
- [x] Undo/redo support for all time-based effects
#### 6.5 Advanced Effects #### 6.5 Advanced Effects
- [ ] Pitch shifter (semitones, cents) - [x] Pitch shifter (semitones, cents with linear interpolation)
- [ ] Time stretcher (without pitch change) - [x] Time stretcher (with and without pitch preservation using overlap-add)
- [ ] Distortion/Overdrive - [x] Distortion/Overdrive (soft/hard/tube types with tone and output control)
- [ ] Bitcrusher (bit depth, sample rate reduction) - [x] Bitcrusher (bit depth and sample rate reduction)
- [x] AdvancedParameterDialog component with visual waveform feedback
- [x] Integration with command palette and side panel
- [x] 4 presets per effect type
- [x] Undo/redo support for all advanced effects
#### 6.6 Effect Management #### 6.6 Effect Management
- [ ] Effect rack/chain - [x] Effect rack/chain (EffectRack component with drag-and-drop)
- [ ] Effect presets - [x] Effect presets (save/load/import/export presets)
- [ ] Effect bypass toggle - [x] Effect bypass toggle (enable/disable individual effects)
- [ ] Wet/Dry mix control - [x] Effect chain state management (useEffectChain hook)
- [ ] Effect reordering - [x] Effect reordering (drag-and-drop within chain)
- [x] Chain tab in SidePanel with preset manager
- [x] localStorage persistence for chains and presets
- [ ] Wet/Dry mix control (per-effect) - FUTURE
- [ ] Real-time effect preview - FUTURE
### Phase 7: Multi-Track Support (In Progress - Core Features Complete)
#### 7.1 Track Management
- [x] Add/Remove tracks
- [x] Track reordering (drag-and-drop) - UI ready
- [x] Track naming (inline editing)
- [x] Track colors (9 preset colors)
- [ ] Track groups/folders (optional) - FUTURE
#### 7.2 Track Controls
- [x] Solo/Mute per track
- [x] Volume fader per track (0-100%)
- [x] Pan control per track (L-C-R)
- [x] Record enable per track (UI ready)
- [x] Track height adjustment (60-300px)
- [x] Track collapse/expand
#### 7.3 Multi-Track Playback ✓
- [x] Real-time multi-track audio mixing
- [x] Synchronized playback across tracks
- [x] Per-track gain and pan control
- [x] Solo/Mute handling during playback
- [ ] Master volume - FUTURE
- [ ] Master output meter - FUTURE
- [ ] Track routing - FUTURE
- [ ] Send/Return effects - FUTURE
- [ ] Sidechain support (advanced) - FUTURE
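The per-track gain and pan math above can be sketched with equal-power panning, which keeps perceived loudness roughly constant as a track sweeps L-C-R (a common choice; the app may use Web Audio's StereoPannerNode instead):

```typescript
// Equal-power pan: pan is -1 (left) .. 0 (center) .. +1 (right).
export function panGains(pan: number): { left: number; right: number } {
  const angle = ((pan + 1) / 2) * (Math.PI / 2); // map pan to 0..pi/2
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

// Mix one frame of N mono tracks down to stereo (illustrative).
export function mixFrame(
  samples: number[], // one sample per track
  volumes: number[], // 0..1 linear gain per track
  pans: number[],    // -1..1 per track
): { left: number; right: number } {
  let left = 0;
  let right = 0;
  for (let t = 0; t < samples.length; t++) {
    const g = panGains(pans[t]);
    left += samples[t] * volumes[t] * g.left;
    right += samples[t] * volumes[t] * g.right;
  }
  return { left, right };
}
```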
#### 7.4 Track Effects (Complete)
- [x] Per-track effect chain
- [x] Effect rack UI
- [x] Effect bypass per track
- [x] Real-time effect processing during playback
- [x] Add/remove effects during playback
- [x] Real-time parameter updates
- [x] Effect chain persistence (localStorage)
### Phase 8: Recording
#### 8.1 Audio Input
- [x] Microphone permission request
- [x] Audio input device selection
- [x] Input level meter
- [x] Input monitoring (real-time level display)
#### 8.2 Recording Controls
- [x] Arm recording (per-track record enable)
- [x] Start/Stop recording (global record button)
- [x] Recording indicator (visual feedback with pulse animation)
- [x] Punch-in/Punch-out recording (UI controls with time inputs)
- [x] Overdub mode (mix recorded audio with existing audio)
#### 8.3 Recording Settings
- [x] Sample rate matching (44.1kHz, 48kHz, 96kHz)
- [x] Input gain control (0.0-2.0 with dB display)
- [x] Mono/Stereo recording selection
- [x] Real-time gain adjustment during recording
- [ ] File naming conventions (future: auto-name recorded tracks)
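The "0.0-2.0 with dB display" item implies converting a linear gain to decibels for the label. A minimal sketch of that conversion (the exact formatting in the app may differ):

```typescript
// Linear gain -> dB label: 1.0 -> 0 dB, 2.0 -> about +6 dB, 0 -> -inf.
export function gainToDbLabel(gain: number): string {
  if (gain <= 0) return '-∞ dB';
  const db = 20 * Math.log10(gain);
  return `${db >= 0 ? '+' : ''}${db.toFixed(1)} dB`;
}
```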
### Phase 9: Automation
#### 9.1 Automation Lanes
- [x] Show/Hide automation lanes per track
- [x] Automation lane for volume
- [x] Automation lane for pan
- [x] Automation lanes for effect parameters
- [x] Single automation lane with parameter dropdown selector
- [x] Automation controls in left sidebar (matching track controls)
- [x] Perfect alignment with waveform width
#### 9.2 Automation Points
- [x] Add/Remove automation points
- [x] Drag automation points
- [x] Automation curves (linear/step support)
- [x] Select automation points
- [x] Delete with keyboard (Delete/Backspace)
- [ ] Copy/Paste automation
- [ ] Bezier curves
#### 9.3 Automation Playback & Recording
- [x] Real-time automation during playback
- [x] Automation for volume and pan
- [x] Automation for effect parameters
- [x] Continuous evaluation via requestAnimationFrame
- [x] Proper parameter range conversion
- [x] Automation recording (write mode) - Volume, Pan, Effect Parameters
- [x] Automation editing modes UI (read/write/touch/latch)
- [x] Automation modes recording implementation (write/touch/latch)
- [x] Touch/latch mode tracking with control interaction
- [x] Throttled automation point creation (every ~100ms)
- [x] Parameter touch callbacks for volume and pan controls
- [x] Parameter touch callbacks for effect parameter sliders
- [x] Touch/latch modes for effect parameters (frequency, Q, gain, etc.)
- [x] Proper prop passing through EffectDevice → EffectParameters → Slider
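The continuous evaluation step above needs to sample the lane at the current playback time each animation frame. A sketch of linear interpolation between automation points, assuming points are kept sorted by time (step curves would return the left point's value instead):

```typescript
interface AutomationPoint {
  time: number;  // seconds
  value: number; // normalized 0..1
}

// Evaluate an automation lane at a playback time: clamp outside the
// first/last point, linearly interpolate between the surrounding pair.
export function evaluateLane(points: AutomationPoint[], time: number): number {
  if (points.length === 0) return 0;
  if (time <= points[0].time) return points[0].value;
  const last = points[points.length - 1];
  if (time >= last.time) return last.value;
  for (let i = 0; i < points.length - 1; i++) {
    const a = points[i];
    const b = points[i + 1];
    if (time >= a.time && time <= b.time) {
      const t = (time - a.time) / (b.time - a.time);
      return a.value + (b.value - a.value) * t;
    }
  }
  return last.value;
}
```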
### Phase 10: Analysis Tools
#### 10.1 Frequency Analyzer
- [x] Real-time FFT analyzer
- [x] Frequency spectrum display
- [ ] Peak/Average display modes
- [ ] Logarithmic/Linear frequency scale
#### 10.2 Spectrogram
- [x] Time-frequency spectrogram view
- [x] Color scale customization (heat map: black/gray → blue → cyan → green → yellow → red)
- [x] FFT size configuration (uses analyserNode.frequencyBinCount)
- [ ] Overlay on waveform (optional)
#### 10.3 Metering
- [x] Peak meter (master and per-track)
- [x] RMS meter (master and per-track)
- [x] Phase correlation meter
- [x] Loudness meter (LUFS with momentary/short-term/integrated)
- [x] Clip indicator (master only)
#### 10.4 Audio Statistics
- [x] File duration
- [x] Sample rate, bit depth, channels
- [x] Peak amplitude
- [x] RMS level
- [x] Dynamic range
- [x] Headroom calculation
### Phase 11: Export & Import (Phase 11.1, 11.2, 11.3 Complete)
#### 11.1 Export Formats ✅ COMPLETE
- [x] WAV export (PCM, various bit depths: 16/24/32-bit)
- [x] Export dialog with settings UI
- [x] Export button in header
- [x] Mix all tracks before export
- [x] MP3 export (using lamejs with dynamic import)
- [x] FLAC export (using fflate DEFLATE compression)
- [ ] OGG Vorbis export (skipped - no good browser encoder available)
**Technical Implementation:**
- MP3 encoding with lamejs: 1152 sample block size, configurable bitrate
- FLAC compression with fflate: DEFLATE-based lossless compression
- TypeScript declarations for lamejs module
- Async/await for dynamic imports to reduce bundle size
- Format-specific UI controls in ExportDialog
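The 1152-sample block size mentioned above is the frame size lamejs consumes; before encoding, float samples also need conversion to 16-bit PCM. A sketch of that pure-math preprocessing, with the encoder calls themselves omitted:

```typescript
// lamejs consumes Int16Array blocks; 1152 samples is one MP3 frame.
const MP3_BLOCK_SIZE = 1152;

// Clamp floats to [-1, 1] and scale to signed 16-bit.
export function floatTo16(samples: Float32Array): Int16Array {
  const out = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    out[i] = Math.round(s * 32767);
  }
  return out;
}

// Split a PCM stream into 1152-sample views (last block may be short).
export function toBlocks(samples: Int16Array): Int16Array[] {
  const blocks: Int16Array[] = [];
  for (let i = 0; i < samples.length; i += MP3_BLOCK_SIZE) {
    blocks.push(samples.subarray(i, i + MP3_BLOCK_SIZE));
  }
  return blocks;
}
```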
#### 11.2 Export Settings ✅ COMPLETE
- [x] Bit depth selection (16/24/32-bit) for WAV and FLAC
- [x] Normalization before export (with 1% headroom)
- [x] Filename customization with dynamic extension display
- [x] Quality/bitrate settings:
- MP3: Bitrate selector (128/192/256/320 kbps)
- FLAC: Compression quality slider (0-9, fast to small)
- [x] Format selector dropdown (WAV/MP3/FLAC)
- [ ] Sample rate conversion
- [ ] Dithering options
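"Normalization with 1% headroom" from the completed list above amounts to scaling the mix so its peak lands just below full scale. A sketch under that assumption (the app's exact headroom handling may differ):

```typescript
// Scale the whole buffer so the peak sits at (1 - headroom) of full scale.
export function normalize(samples: Float32Array, headroom = 0.01): Float32Array {
  let peak = 0;
  for (let i = 0; i < samples.length; i++) {
    peak = Math.max(peak, Math.abs(samples[i]));
  }
  if (peak === 0) return samples.slice(); // silence: nothing to scale
  const gain = (1 - headroom) / peak;
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] * gain;
  }
  return out;
}
```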
#### 11.3 Export Regions ✅ COMPLETE
- [x] Export entire project (mix all tracks)
- [x] Export selected region (extract and mix selection from all tracks)
- [x] Export individual tracks (separate files with sanitized names)
- [ ] Batch export all regions (future feature)
#### 11.4 Import ✅ COMPLETE
- [x] Support for WAV, MP3, OGG, FLAC, M4A, AIFF
- [x] Sample rate conversion on import
- [x] Stereo to mono conversion
- [x] File metadata reading (codec detection, duration, channels, sample rate)
- [x] ImportOptions interface for flexible import configuration
- [x] importAudioFile() function returning buffer + metadata
- [x] Normalize on import option
- [x] Import settings dialog component (ready for integration)
### Phase 12: Project Management
#### 12.1 Save/Load Projects
- [x] Save project to IndexedDB
- [x] Load project from IndexedDB
- [x] Project list UI (Projects dialog)
- [x] Auto-save functionality (3-second debounce)
- [x] Manual save with Ctrl+S
- [x] Auto-load last project on startup
- [x] Editable project name in header
- [x] Delete and duplicate projects
#### 12.2 Project Structure
- [x] IndexedDB storage with serialization
- [x] Track information (name, color, volume, pan, mute, solo)
- [x] Audio buffer serialization (Float32Array per channel)
- [x] Effect settings (serialized effect chains)
- [x] Automation data (deep cloned to remove functions)
- [x] Project metadata (name, description, duration, track count)
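The "Float32Array per channel" serialization above needs a JSON-safe shape for the export path, since typed arrays do not survive JSON.stringify. A sketch of a round trip (field names are illustrative, not the app's schema):

```typescript
// JSON-safe shape for one audio buffer's channel data.
interface SerializedBuffer {
  sampleRate: number;
  length: number;
  channels: number[][]; // one plain array per channel
}

export function serializeChannels(
  channels: Float32Array[],
  sampleRate: number,
): SerializedBuffer {
  return {
    sampleRate,
    length: channels[0]?.length ?? 0,
    channels: channels.map(ch => Array.from(ch)), // typed array -> number[]
  };
}

export function deserializeChannels(data: SerializedBuffer): Float32Array[] {
  // Rebuild typed arrays on import; these can then fill an AudioBuffer.
  return data.channels.map(ch => Float32Array.from(ch));
}
```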
#### 12.3 Project Export/Import
- [x] Export project as JSON (with audio data embedded)
- [x] Import project from JSON
- [x] Export button per project in Projects dialog
- [x] Import button in Projects dialog header
- [x] Auto-generate new IDs on import to avoid conflicts
- [ ] Project templates (future enhancement)
#### 12.4 Project Settings
- [x] Sample rate (stored per project)
- [x] Zoom level (persisted)
- [x] Current time (persisted)
- [x] Project name/description
- [x] Created/updated timestamps
### Phase 13: Keyboard Shortcuts
#### 13.1 Playback Shortcuts
- [x] Spacebar - Play/Pause
- [x] Home - Go to start
- [x] End - Go to end
- [x] Left/Right Arrow - Seek ±1 second
- [x] Ctrl+Left/Right - Seek ±5 seconds
#### 13.2 Editing Shortcuts
- [x] Ctrl+Z - Undo
- [x] Ctrl+Y / Ctrl+Shift+Z - Redo
- [x] Ctrl+X - Cut
- [x] Ctrl+C - Copy
- [x] Ctrl+V - Paste
- [x] Ctrl+S - Save project
- [x] Ctrl+D - Duplicate selection
- [x] Delete/Backspace - Delete selection
- [x] Ctrl+A - Select All (on current track)
- [x] Escape - Clear selection
#### 13.3 View Shortcuts
- [x] Ctrl+Plus/Equals - Zoom in
- [x] Ctrl+Minus - Zoom out
- [x] Ctrl+0 - Fit to window
- [ ] F - Toggle fullscreen (browser native)
#### 13.4 Custom Shortcuts
- [ ] Keyboard shortcuts manager (future enhancement)
- [ ] User-configurable shortcuts (future enhancement)
- [ ] Shortcut conflict detection (future enhancement)
### Phase 14: Settings & Preferences ✅ COMPLETE
**✅ Accomplished:**
- Global settings system with localStorage persistence
- Settings dialog with 5 tabs (Recording, Audio, Editor, Interface, Performance)
- Real-time settings application to editor behavior
- Category-specific reset buttons
- Merge with defaults on load for backward compatibility
#### 14.1 Audio Settings
- [ ] Audio output device selection (future: requires device enumeration API)
- [x] Buffer size/latency configuration
- [x] Sample rate preference (applied to recording)
- [x] Auto-normalize on import
#### 14.2 UI Settings
- [x] Theme selection (dark/light/auto)
- [x] Font size (small/medium/large)
- [x] Default track height (120-400px, applied to new tracks)
- [ ] Color scheme customization (future: advanced theming)
#### 14.3 Editor Settings
- [x] Auto-save interval (0-60 seconds)
- [x] Undo history limit (10-200 operations)
- [x] Snap-to-grid toggle
- [x] Grid resolution (0.1-10 seconds)
- [x] Default zoom level (1-20x, applied to initial state)
#### 14.4 Performance Settings ✅
- [x] Peak calculation quality (low/medium/high)
- [x] Waveform rendering quality (low/medium/high)
- [x] Enable/disable spectrogram (applied to analyzer visibility)
- [x] Maximum file size limit (100-1000 MB)
### Phase 15: Polish & Optimization
#### 15.1 Performance Optimization
- [ ] Web Worker for heavy computations
- [ ] AudioWorklet for custom processing
- [x] Lazy loading for dialogs and analysis components (GlobalSettingsDialog, ExportDialog, ProjectsDialog, ImportTrackDialog, FrequencyAnalyzer, Spectrogram, PhaseCorrelationMeter, LUFSMeter, AudioStatistics)
- [ ] Code splitting for route optimization
- [x] Memory leak prevention (audio-cleanup utilities, proper cleanup in useRecording, animation frame cancellation in visualizations)
#### 15.2 Responsive Design
- [x] Mobile-friendly layout (responsive header, adaptive toolbar with icon-only buttons on small screens)
- [x] Touch gesture support (collapse/expand controls with chevron buttons)
- [x] Adaptive toolbar (hide less critical buttons on mobile: Export on md, Clear All on lg)
- [x] Vertical scrolling for track list (sidebar hidden on mobile < lg breakpoint)
- [x] Collapsible track controls (two-state mobile: collapsed with minimal controls + horizontal meter, expanded with full height fader + pan control; desktop always expanded with narrow borders)
- [x] Collapsible master controls (collapsed view with horizontal level meter, expanded view with full controls; collapse button hidden on desktop)
- [x] Track collapse buttons on mobile (left chevron: collapses/expands track in list, right chevron: collapses/expands track controls)
- [x] Mobile vertical stacking layout (< lg breakpoint: controls → waveform → automation bars → effects bars per track, master controls and transport controls stacked vertically in bottom bar)
- [x] Desktop two-column layout (≥ lg breakpoint: controls left sidebar, waveforms right panel with automation/effects bars, master controls in right sidebar, transport controls centered in bottom bar)
- [x] Automation and effects bars on mobile (collapsible with eye/eye-off icons, horizontally scrollable, full functionality: parameter selection, mode cycling, height controls, add effects)
- [x] Height synchronization (track controls and waveform container heights match exactly using user-configurable track.height on desktop)
#### 15.3 Error Handling
- [x] Graceful error messages (toast notifications for copy/paste/edit operations)
- [x] File format error handling (UnsupportedFormatDialog with format validation and decode error catching)
- [x] Memory limit warnings (MemoryWarningDialog with file size checks)
- [x] Browser compatibility checks (BrowserCompatDialog with Web Audio API detection)
#### 15.4 Documentation
- [ ] User guide
- [x] Keyboard shortcuts reference (KeyboardShortcutsDialog with ? shortcut and command palette integration)
- [ ] Effect descriptions
- [ ] Troubleshooting guide


@@ -19,97 +19,97 @@
/* CSS Variables for theming */
@layer base {
  :root {
    /* Light mode colors using OKLCH - bright neon palette */
    --background: oklch(98% 0.03 180);
    --foreground: oklch(20% 0.12 310);
    --card: oklch(99% 0.02 200);
    --card-foreground: oklch(20% 0.12 310);
    --popover: oklch(99% 0.02 200);
    --popover-foreground: oklch(20% 0.12 310);
    --primary: oklch(58% 0.28 320);
    --primary-foreground: oklch(99% 0.02 200);
    --secondary: oklch(92% 0.08 200);
    --secondary-foreground: oklch(25% 0.15 300);
    --muted: oklch(94% 0.05 190);
    --muted-foreground: oklch(40% 0.12 260);
    --accent: oklch(90% 0.12 180);
    --accent-foreground: oklch(25% 0.18 310);
    --destructive: oklch(60% 0.28 15);
    --destructive-foreground: oklch(99% 0.02 200);
    --border: oklch(85% 0.08 200);
    --input: oklch(92% 0.06 190);
    --ring: oklch(58% 0.28 320);
    --radius: 0.5rem;
    --success: oklch(58% 0.25 160);
    --success-foreground: oklch(99% 0.02 200);
    --warning: oklch(68% 0.25 85);
    --warning-foreground: oklch(20% 0.12 310);
    --info: oklch(62% 0.25 240);
    --info-foreground: oklch(99% 0.02 200);
    /* Audio-specific colors - neon cyan/magenta */
    --waveform: oklch(60% 0.26 200);
    --waveform-progress: oklch(58% 0.28 320);
    --waveform-selection: oklch(62% 0.26 180);
    --waveform-bg: oklch(99% 0.015 190);
  }
  .dark {
    /* Dark mode colors using OKLCH - vibrant neon palette */
    --background: oklch(15% 0.015 265);
    --foreground: oklch(92% 0.02 180);
    --card: oklch(18% 0.02 270);
    --card-foreground: oklch(92% 0.02 180);
    --popover: oklch(18% 0.02 270);
    --popover-foreground: oklch(92% 0.02 180);
    --primary: oklch(75% 0.25 310);
    --primary-foreground: oklch(18% 0.02 270);
    --secondary: oklch(22% 0.03 280);
    --secondary-foreground: oklch(85% 0.15 180);
    --muted: oklch(20% 0.02 270);
    --muted-foreground: oklch(65% 0.1 200);
    --accent: oklch(25% 0.03 290);
    --accent-foreground: oklch(85% 0.2 320);
    --destructive: oklch(65% 0.25 20);
    --destructive-foreground: oklch(92% 0.02 180);
    --border: oklch(30% 0.05 280);
    --input: oklch(22% 0.03 280);
    --ring: oklch(75% 0.25 310);
    --success: oklch(70% 0.22 160);
    --success-foreground: oklch(18% 0.02 270);
    --warning: oklch(75% 0.22 80);
    --warning-foreground: oklch(18% 0.02 270);
    --info: oklch(72% 0.22 240);
    --info-foreground: oklch(18% 0.02 270);
    /* Audio-specific colors - neon cyan/magenta */
    --waveform: oklch(72% 0.25 200);
    --waveform-progress: oklch(75% 0.25 310);
    --waveform-selection: oklch(70% 0.25 180);
    --waveform-bg: oklch(12% 0.02 270);
  }
}
@@ -158,6 +158,41 @@
@apply bg-background text-foreground;
font-feature-settings: "rlig" 1, "calt" 1;
}
/* Apply custom scrollbar globally */
* {
scrollbar-width: thin;
}
*::-webkit-scrollbar {
width: 10px;
height: 10px;
}
*::-webkit-scrollbar-track {
background: var(--muted);
border-radius: 5px;
}
*::-webkit-scrollbar-thumb {
background: color-mix(in oklch, var(--muted-foreground) 30%, transparent);
border-radius: 5px;
border: 2px solid var(--muted);
transition: background 0.2s ease;
}
*::-webkit-scrollbar-thumb:hover {
background: color-mix(in oklch, var(--muted-foreground) 50%, transparent);
}
*::-webkit-scrollbar-thumb:active {
background: color-mix(in oklch, var(--muted-foreground) 70%, transparent);
}
/* Scrollbar corners */
*::-webkit-scrollbar-corner {
background: var(--muted);
}
}
/* Custom animations */
@@ -290,22 +325,60 @@
/* Custom scrollbar */
@layer utilities {
.custom-scrollbar {
scrollbar-width: thin;
scrollbar-color: color-mix(in oklch, var(--muted-foreground) 30%, transparent) var(--muted);
}
.custom-scrollbar::-webkit-scrollbar {
width: 8px;
height: 8px;
}
.custom-scrollbar::-webkit-scrollbar-track {
background: var(--muted);
border-radius: 4px;
}
.custom-scrollbar::-webkit-scrollbar-thumb {
background: color-mix(in oklch, var(--muted-foreground) 30%, transparent);
border-radius: 4px;
border: 2px solid var(--muted);
transition: background 0.2s ease;
}
.custom-scrollbar::-webkit-scrollbar-thumb:hover {
background: color-mix(in oklch, var(--muted-foreground) 50%, transparent);
}
.custom-scrollbar::-webkit-scrollbar-thumb:active {
background: color-mix(in oklch, var(--muted-foreground) 70%, transparent);
}
/* Clip/Region styling for Ableton-style appearance */
.track-clip-container {
@apply absolute inset-2 rounded-sm shadow-sm overflow-hidden transition-all duration-150;
background: oklch(0.2 0.01 var(--hue) / 0.3);
border: 1px solid oklch(0.4 0.02 var(--hue) / 0.5);
}
.track-clip-container:hover {
border-color: oklch(0.5 0.03 var(--hue) / 0.7);
}
.track-clip-header {
@apply absolute top-0 left-0 right-0 h-4 pointer-events-none z-10 px-2 flex items-center;
background: linear-gradient(to bottom, rgb(0 0 0 / 0.1), transparent);
}
}
@layer utilities {
[data-theme='light'] .track-clip-container {
background: oklch(0.95 0.01 var(--hue) / 0.3);
border: 1px solid oklch(0.7 0.02 var(--hue) / 0.5);
}
[data-theme='light'] .track-clip-header {
background: linear-gradient(to bottom, rgb(255 255 255 / 0.15), transparent);
}
}


@@ -1,76 +1,17 @@
'use client';
import * as React from 'react';
import { Music } from 'lucide-react';
import { ThemeToggle } from '@/components/layout/ThemeToggle';
import { ToastProvider } from '@/components/ui/Toast';
import { AudioEditor } from '@/components/editor/AudioEditor';
export default function Home() {
return (
<ToastProvider>
<div className="flex flex-col h-screen bg-background overflow-hidden">
{/* Audio Editor (includes header) */}
<AudioEditor />
</div>
</ToastProvider>
);


@@ -0,0 +1,159 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
import type { Track } from '@/types/track';
export interface AudioStatisticsProps {
tracks: Track[];
className?: string;
}
export function AudioStatistics({ tracks, className }: AudioStatisticsProps) {
const stats = React.useMemo(() => {
if (tracks.length === 0) {
return {
totalDuration: 0,
longestTrack: 0,
sampleRate: 0,
channels: 0,
bitDepth: 32,
peakAmplitude: 0,
rmsLevel: 0,
dynamicRange: 0,
trackCount: 0,
};
}
let maxDuration = 0;
let maxPeak = 0;
let sumRms = 0;
let rmsCount = 0;
let sampleRate = 0;
let channels = 0;
tracks.forEach(track => {
if (!track.audioBuffer) return;
const duration = track.audioBuffer.duration;
maxDuration = Math.max(maxDuration, duration);
// Get sample rate and channels from first track
if (sampleRate === 0) {
sampleRate = track.audioBuffer.sampleRate;
channels = track.audioBuffer.numberOfChannels;
}
// Calculate peak and RMS from buffer
for (let ch = 0; ch < track.audioBuffer.numberOfChannels; ch++) {
const channelData = track.audioBuffer.getChannelData(ch);
let chPeak = 0;
let chRmsSum = 0;
for (let i = 0; i < channelData.length; i++) {
const abs = Math.abs(channelData[i]);
chPeak = Math.max(chPeak, abs);
chRmsSum += channelData[i] * channelData[i];
}
maxPeak = Math.max(maxPeak, chPeak);
sumRms += Math.sqrt(chRmsSum / channelData.length);
rmsCount++;
}
});
// Average over the channels that actually contributed, so tracks
// without an audioBuffer don't dilute the RMS figure
const avgRms = rmsCount > 0 ? sumRms / rmsCount : 0;
const peakDb = maxPeak > 0 ? 20 * Math.log10(maxPeak) : -Infinity;
const rmsDb = avgRms > 0 ? 20 * Math.log10(avgRms) : -Infinity;
const dynamicRange = peakDb - rmsDb;
return {
totalDuration: maxDuration,
longestTrack: maxDuration,
sampleRate,
channels,
bitDepth: 32, // Web Audio API uses 32-bit float
peakAmplitude: maxPeak,
rmsLevel: avgRms,
dynamicRange: dynamicRange > 0 ? dynamicRange : 0,
trackCount: tracks.length,
};
}, [tracks]);
const formatDuration = (seconds: number) => {
const mins = Math.floor(seconds / 60);
const secs = Math.floor(seconds % 60);
const ms = Math.floor((seconds % 1) * 1000);
return `${mins}:${secs.toString().padStart(2, '0')}.${ms.toString().padStart(3, '0')}`;
};
const formatDb = (linear: number) => {
if (linear === 0) return '-∞ dB';
const db = 20 * Math.log10(linear);
return db > -60 ? `${db.toFixed(1)} dB` : '-∞ dB';
};
return (
<div className={cn('w-full h-full bg-card/50 border-2 border-accent/50 rounded-lg p-3', className)}>
<div className="text-[10px] font-bold text-accent uppercase tracking-wider mb-3">
Audio Statistics
</div>
<div className="space-y-2 text-[10px]">
{/* File Info */}
<div className="space-y-1">
<div className="text-[9px] text-muted-foreground uppercase tracking-wide">Project Info</div>
<div className="grid grid-cols-2 gap-x-2 gap-y-1">
<div className="text-muted-foreground">Tracks:</div>
<div className="font-mono text-right">{stats.trackCount}</div>
<div className="text-muted-foreground">Duration:</div>
<div className="font-mono text-right">{formatDuration(stats.totalDuration)}</div>
<div className="text-muted-foreground">Sample Rate:</div>
<div className="font-mono text-right">{stats.sampleRate > 0 ? `${(stats.sampleRate / 1000).toFixed(1)} kHz` : 'N/A'}</div>
<div className="text-muted-foreground">Channels:</div>
<div className="font-mono text-right">{stats.channels > 0 ? (stats.channels === 1 ? 'Mono' : 'Stereo') : 'N/A'}</div>
<div className="text-muted-foreground">Bit Depth:</div>
<div className="font-mono text-right">{stats.bitDepth}-bit float</div>
</div>
</div>
{/* Divider */}
<div className="border-t border-border/30" />
{/* Audio Levels */}
<div className="space-y-1">
<div className="text-[9px] text-muted-foreground uppercase tracking-wide">Levels</div>
<div className="grid grid-cols-2 gap-x-2 gap-y-1">
<div className="text-muted-foreground">Peak:</div>
<div className={cn(
'font-mono text-right',
stats.peakAmplitude > 0.99 ? 'text-red-500 font-bold' : ''
)}>
{formatDb(stats.peakAmplitude)}
</div>
<div className="text-muted-foreground">RMS:</div>
<div className="font-mono text-right">{formatDb(stats.rmsLevel)}</div>
<div className="text-muted-foreground">Dynamic Range:</div>
<div className="font-mono text-right">
{stats.dynamicRange > 0 ? `${stats.dynamicRange.toFixed(1)} dB` : 'N/A'}
</div>
<div className="text-muted-foreground">Headroom:</div>
<div className={cn(
'font-mono text-right',
stats.peakAmplitude > 0.99 ? 'text-red-500' :
stats.peakAmplitude > 0.9 ? 'text-yellow-500' : 'text-green-500'
)}>
{stats.peakAmplitude > 0 ? `${(20 * Math.log10(1 / stats.peakAmplitude)).toFixed(1)} dB` : 'N/A'}
</div>
</div>
</div>
</div>
</div>
);
}
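The headroom readout in this panel is simply the peak amplitude re-expressed in dB below digital full scale. A minimal standalone sketch of that conversion (function name is illustrative, not part of the component):

```typescript
// Headroom in dB below digital full scale for a linear peak in (0, 1].
// 20 * log10(1 / peak) is equivalent to -20 * log10(peak).
function headroomDb(peak: number): number {
  return 20 * Math.log10(1 / peak);
}
```

A full-scale peak (1.0) leaves 0 dB of headroom; halving the amplitude gains roughly 6 dB.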


@@ -0,0 +1,91 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface FrequencyAnalyzerProps {
analyserNode: AnalyserNode | null;
className?: string;
}
export function FrequencyAnalyzer({ analyserNode, className }: FrequencyAnalyzerProps) {
const canvasRef = React.useRef<HTMLCanvasElement>(null);
const animationFrameRef = React.useRef<number | undefined>(undefined);
React.useEffect(() => {
if (!analyserNode || !canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Set canvas size
const dpr = window.devicePixelRatio || 1;
const rect = canvas.getBoundingClientRect();
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
ctx.scale(dpr, dpr);
const bufferLength = analyserNode.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
// Get background color from computed styles
const bgColor = getComputedStyle(canvas.parentElement!).backgroundColor;
const draw = () => {
animationFrameRef.current = requestAnimationFrame(draw);
analyserNode.getByteFrequencyData(dataArray);
// Clear canvas with parent background color
ctx.fillStyle = bgColor;
ctx.fillRect(0, 0, rect.width, rect.height);
const barWidth = rect.width / bufferLength;
let x = 0;
for (let i = 0; i < bufferLength; i++) {
const barHeight = (dataArray[i] / 255) * rect.height;
// Color gradient based on frequency
const hue = (i / bufferLength) * 120; // hue offset: sweeps from 180 (cyan) to 300 (magenta)
ctx.fillStyle = `hsl(${180 + hue}, 70%, 50%)`;
ctx.fillRect(x, rect.height - barHeight, barWidth, barHeight);
x += barWidth;
}
// Draw frequency labels
ctx.fillStyle = 'rgba(255, 255, 255, 0.5)';
ctx.font = '10px monospace';
ctx.textAlign = 'left';
ctx.fillText('20Hz', 5, rect.height - 5);
ctx.textAlign = 'center';
ctx.fillText('1kHz', rect.width / 2, rect.height - 5);
ctx.textAlign = 'right';
ctx.fillText('20kHz', rect.width - 5, rect.height - 5);
};
draw();
return () => {
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
};
}, [analyserNode]);
return (
<div className={cn('w-full h-full bg-card/50 border-2 border-accent/50 rounded-lg p-2', className)}>
<div className="text-[10px] font-bold text-accent uppercase tracking-wider mb-2">
Frequency Analyzer
</div>
<div className="w-full h-[calc(100%-24px)] rounded bg-muted/30">
<canvas
ref={canvasRef}
className="w-full h-full rounded"
/>
</div>
</div>
);
}
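The analyzer above draws one bar per FFT bin on a linear axis, so the 20 Hz / 1 kHz / 20 kHz labels are only approximate markers. The bin-to-frequency mapping itself is fixed by the FFT size and sample rate; a sketch (the function name is illustrative):

```typescript
// Maps an analyser bin index to its center frequency in Hz.
// binCount is analyser.frequencyBinCount (fftSize / 2); sampleRate
// comes from the owning AudioContext.
function binToFrequency(bin: number, binCount: number, sampleRate: number): number {
  // Each bin spans sampleRate / fftSize Hz, and fftSize = 2 * binCount.
  return (bin * sampleRate) / (2 * binCount);
}
```

At 48 kHz with a 2048-point FFT, bin 512 sits at 12 kHz and the topmost bin at the 24 kHz Nyquist limit, which is why most musical content crowds into the left portion of a linear display.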


@@ -0,0 +1,167 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface LUFSMeterProps {
analyserNode: AnalyserNode | null;
className?: string;
}
export function LUFSMeter({ analyserNode, className }: LUFSMeterProps) {
const canvasRef = React.useRef<HTMLCanvasElement>(null);
const animationFrameRef = React.useRef<number | undefined>(undefined);
const [lufs, setLufs] = React.useState({ integrated: -23, shortTerm: -23, momentary: -23 });
const lufsHistoryRef = React.useRef<number[]>([]);
React.useEffect(() => {
if (!analyserNode || !canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Set canvas size
const dpr = window.devicePixelRatio || 1;
const rect = canvas.getBoundingClientRect();
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
ctx.scale(dpr, dpr);
const bufferLength = analyserNode.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
const draw = () => {
animationFrameRef.current = requestAnimationFrame(draw);
analyserNode.getByteFrequencyData(dataArray);
// Calculate RMS from frequency data
let sum = 0;
for (let i = 0; i < bufferLength; i++) {
const normalized = dataArray[i] / 255;
sum += normalized * normalized;
}
const rms = Math.sqrt(sum / bufferLength);
// Convert to LUFS approximation (simplified K-weighting)
// Real LUFS requires proper K-weighting filter, this is an approximation
let lufsValue = -23; // Silence baseline
if (rms > 0.0001) {
lufsValue = 20 * Math.log10(rms) - 0.691; // Simplified LUFS estimation
lufsValue = Math.max(-70, Math.min(0, lufsValue));
}
// Store history for integrated measurement
lufsHistoryRef.current.push(lufsValue);
if (lufsHistoryRef.current.length > 300) { // keep ~300 frames (≈5 s at a typical 60 fps rAF rate)
lufsHistoryRef.current.shift();
}
// Calculate measurements
const momentary = lufsValue; // Current value
const shortTerm = lufsHistoryRef.current.slice(-90).reduce((a, b) => a + b, 0) / Math.min(90, lufsHistoryRef.current.length); // last 90 frames (≈1.5 s at 60 fps)
const integrated = lufsHistoryRef.current.reduce((a, b) => a + b, 0) / lufsHistoryRef.current.length; // All time
setLufs({ integrated, shortTerm, momentary });
// Clear canvas
const bgColor = getComputedStyle(canvas.parentElement!).backgroundColor;
ctx.fillStyle = bgColor;
ctx.fillRect(0, 0, rect.width, rect.height);
// Draw LUFS scale (-70 to 0)
const lufsToY = (value: number) => {
return ((0 - value) / 70) * rect.height;
};
// Draw reference lines
ctx.strokeStyle = 'rgba(128, 128, 128, 0.2)';
ctx.lineWidth = 1;
[-23, -16, -9, -3].forEach(db => {
const y = lufsToY(db);
ctx.beginPath();
ctx.moveTo(0, y);
ctx.lineTo(rect.width, y);
ctx.stroke();
// Labels
ctx.fillStyle = 'rgba(255, 255, 255, 0.4)';
ctx.font = '9px monospace';
ctx.textAlign = 'right';
ctx.fillText(`${db}`, rect.width - 2, y - 2);
});
// Draw -23 LUFS broadcast standard line
ctx.strokeStyle = 'rgba(59, 130, 246, 0.5)';
ctx.lineWidth = 2;
const standardY = lufsToY(-23);
ctx.beginPath();
ctx.moveTo(0, standardY);
ctx.lineTo(rect.width, standardY);
ctx.stroke();
// Draw bars
const barWidth = rect.width / 4;
const drawBar = (value: number, x: number, color: string, label: string) => {
const y = lufsToY(value);
const height = rect.height - y;
ctx.fillStyle = color;
ctx.fillRect(x, y, barWidth - 4, height);
// Label
ctx.fillStyle = 'rgba(255, 255, 255, 0.7)';
ctx.font = 'bold 9px monospace';
ctx.textAlign = 'center';
ctx.fillText(label, x + barWidth / 2 - 2, rect.height - 2);
};
drawBar(momentary, 0, 'rgba(239, 68, 68, 0.7)', 'M');
drawBar(shortTerm, barWidth, 'rgba(251, 146, 60, 0.7)', 'S');
drawBar(integrated, barWidth * 2, 'rgba(34, 197, 94, 0.7)', 'I');
};
draw();
return () => {
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
};
}, [analyserNode]);
return (
<div className={cn('w-full h-full bg-card/50 border-2 border-accent/50 rounded-lg p-2', className)}>
<div className="text-[10px] font-bold text-accent uppercase tracking-wider mb-2">
LUFS Loudness
</div>
<div className="w-full h-[calc(100%-24px)] rounded bg-muted/30 flex flex-col">
<canvas
ref={canvasRef}
className="w-full flex-1 rounded"
/>
<div className="grid grid-cols-3 gap-1 mt-2 text-[9px] font-mono text-center">
<div>
<div className="text-muted-foreground">Momentary</div>
<div className={cn('font-bold', lufs.momentary > -9 ? 'text-red-500' : 'text-foreground')}>
{lufs.momentary > -70 ? lufs.momentary.toFixed(1) : '-∞'}
</div>
</div>
<div>
<div className="text-muted-foreground">Short-term</div>
<div className={cn('font-bold', lufs.shortTerm > -16 ? 'text-orange-500' : 'text-foreground')}>
{lufs.shortTerm > -70 ? lufs.shortTerm.toFixed(1) : '-∞'}
</div>
</div>
<div>
<div className="text-muted-foreground">Integrated</div>
<div className={cn('font-bold', Math.abs(lufs.integrated + 23) < 2 ? 'text-green-500' : 'text-foreground')}>
{lufs.integrated > -70 ? lufs.integrated.toFixed(1) : '-∞'}
</div>
</div>
</div>
</div>
</div>
);
}
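As the comments note, this meter averages dB values directly instead of applying a true K-weighting filter and energy-domain gating, so its readings are an approximation. The momentary / short-term / integrated split then reduces to rolling means over the history buffer. A standalone sketch of that split, using the component's 90-frame window (frame counts, not wall-clock seconds, are what the component actually tracks):

```typescript
// Rolling averages over a non-empty loudness history, mirroring the
// meter's short-term (last 90 frames) and integrated (whole history)
// values. Averaging dB values is itself an approximation.
function loudnessAverages(history: number[]): { shortTerm: number; integrated: number } {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return {
    shortTerm: mean(history.slice(-90)),
    integrated: mean(history),
  };
}
```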


@@ -0,0 +1,181 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface PhaseCorrelationMeterProps {
analyserNode: AnalyserNode | null;
className?: string;
}
export function PhaseCorrelationMeter({ analyserNode, className }: PhaseCorrelationMeterProps) {
const canvasRef = React.useRef<HTMLCanvasElement>(null);
const animationFrameRef = React.useRef<number | undefined>(undefined);
const [correlation, setCorrelation] = React.useState(0);
React.useEffect(() => {
if (!analyserNode || !canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Set canvas size
const dpr = window.devicePixelRatio || 1;
const rect = canvas.getBoundingClientRect();
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
ctx.scale(dpr, dpr);
const audioContext = analyserNode.context as AudioContext;
const bufferLength = analyserNode.fftSize;
const dataArrayL = new Float32Array(bufferLength);
const dataArrayR = new Float32Array(bufferLength);
// Create a splitter to get L/R channels
const splitter = audioContext.createChannelSplitter(2);
const analyserL = audioContext.createAnalyser();
const analyserR = audioContext.createAnalyser();
analyserL.fftSize = bufferLength;
analyserR.fftSize = bufferLength;
// Try to connect to the analyser node's source
// Note: This is a simplified approach - ideally we'd get the source node
try {
analyserNode.connect(splitter);
splitter.connect(analyserL, 0);
splitter.connect(analyserR, 1);
} catch (e) {
// If connection fails, just show static display
}
const draw = () => {
animationFrameRef.current = requestAnimationFrame(draw);
try {
analyserL.getFloatTimeDomainData(dataArrayL);
analyserR.getFloatTimeDomainData(dataArrayR);
// Calculate phase correlation (Pearson correlation coefficient)
let sumL = 0, sumR = 0, sumLR = 0, sumL2 = 0, sumR2 = 0;
const n = bufferLength;
for (let i = 0; i < n; i++) {
sumL += dataArrayL[i];
sumR += dataArrayR[i];
sumLR += dataArrayL[i] * dataArrayR[i];
sumL2 += dataArrayL[i] * dataArrayL[i];
sumR2 += dataArrayR[i] * dataArrayR[i];
}
const meanL = sumL / n;
const meanR = sumR / n;
const covLR = (sumLR / n) - (meanL * meanR);
const varL = (sumL2 / n) - (meanL * meanL);
const varR = (sumR2 / n) - (meanR * meanR);
let r = 0;
if (varL > 0 && varR > 0) {
r = covLR / Math.sqrt(varL * varR);
r = Math.max(-1, Math.min(1, r)); // Clamp to [-1, 1]
}
setCorrelation(r);
// Clear canvas
const bgColor = getComputedStyle(canvas.parentElement!).backgroundColor;
ctx.fillStyle = bgColor;
ctx.fillRect(0, 0, rect.width, rect.height);
// Draw scale background
const centerY = rect.height / 2;
const barHeight = 20;
// Draw scale markers
ctx.fillStyle = 'rgba(128, 128, 128, 0.2)';
ctx.fillRect(0, centerY - barHeight / 2, rect.width, barHeight);
// Draw center line (0)
ctx.strokeStyle = 'rgba(128, 128, 128, 0.5)';
ctx.lineWidth = 1;
ctx.beginPath();
ctx.moveTo(rect.width / 2, centerY - barHeight / 2 - 5);
ctx.lineTo(rect.width / 2, centerY + barHeight / 2 + 5);
ctx.stroke();
// Draw correlation indicator
const x = ((r + 1) / 2) * rect.width;
// Color based on correlation value
let color;
if (r > 0.9) {
color = '#10b981'; // Green - good correlation (mono-ish)
} else if (r > 0.5) {
color = '#84cc16'; // Lime - moderate correlation
} else if (r > -0.5) {
color = '#eab308'; // Yellow - decorrelated (good stereo)
} else if (r > -0.9) {
color = '#f97316'; // Orange - negative correlation
} else {
color = '#ef4444'; // Red - phase issues
}
ctx.fillStyle = color;
ctx.fillRect(x - 2, centerY - barHeight / 2, 4, barHeight);
// Draw labels
ctx.fillStyle = 'rgba(255, 255, 255, 0.7)';
ctx.font = '9px monospace';
ctx.textAlign = 'left';
ctx.fillText('-1', 2, centerY - barHeight / 2 - 8);
ctx.textAlign = 'center';
ctx.fillText('0', rect.width / 2, centerY - barHeight / 2 - 8);
ctx.textAlign = 'right';
ctx.fillText('+1', rect.width - 2, centerY - barHeight / 2 - 8);
// Draw correlation value
ctx.textAlign = 'center';
ctx.font = 'bold 11px monospace';
ctx.fillText(r.toFixed(3), rect.width / 2, centerY + barHeight / 2 + 15);
} catch (e) {
// Silently handle errors
}
};
draw();
return () => {
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
try {
splitter.disconnect();
analyserL.disconnect();
analyserR.disconnect();
} catch (e) {
// Ignore disconnection errors
}
};
}, [analyserNode]);
return (
<div className={cn('w-full h-full bg-card/50 border-2 border-accent/50 rounded-lg p-2', className)}>
<div className="text-[10px] font-bold text-accent uppercase tracking-wider mb-2">
Phase Correlation
</div>
<div className="w-full h-[calc(100%-24px)] rounded bg-muted/30 flex flex-col items-center justify-center">
<canvas
ref={canvasRef}
className="w-full h-16 rounded"
/>
<div className="text-[9px] text-muted-foreground mt-2 text-center px-2">
{correlation > 0.9 ? 'Mono-like' :
correlation > 0.5 ? 'Good Stereo' :
correlation > -0.5 ? 'Wide Stereo' :
'Phase Issues'}
</div>
</div>
</div>
);
}
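The statistic the meter computes each frame is the Pearson correlation coefficient between the left and right time-domain buffers: +1 for identical (mono-compatible) channels, -1 for polarity-inverted ones, near 0 for decorrelated wide stereo. Extracted as a standalone function (equal-length buffers assumed):

```typescript
// Pearson correlation between two equal-length sample buffers,
// the same statistic the meter derives per animation frame.
function phaseCorrelation(left: Float32Array, right: Float32Array): number {
  const n = left.length;
  let sumL = 0, sumR = 0, sumLR = 0, sumL2 = 0, sumR2 = 0;
  for (let i = 0; i < n; i++) {
    sumL += left[i];
    sumR += right[i];
    sumLR += left[i] * right[i];
    sumL2 += left[i] * left[i];
    sumR2 += right[i] * right[i];
  }
  const meanL = sumL / n;
  const meanR = sumR / n;
  const varL = sumL2 / n - meanL * meanL;
  const varR = sumR2 / n - meanR * meanR;
  if (varL <= 0 || varR <= 0) return 0; // silence or DC on either channel
  const cov = sumLR / n - meanL * meanR;
  return Math.max(-1, Math.min(1, cov / Math.sqrt(varL * varR)));
}
```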


@@ -0,0 +1,134 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface SpectrogramProps {
analyserNode: AnalyserNode | null;
className?: string;
}
export function Spectrogram({ analyserNode, className }: SpectrogramProps) {
const canvasRef = React.useRef<HTMLCanvasElement>(null);
const animationFrameRef = React.useRef<number | undefined>(undefined);
const spectrogramDataRef = React.useRef<ImageData | null>(null);
React.useEffect(() => {
if (!analyserNode || !canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Set canvas size
const dpr = window.devicePixelRatio || 1;
const rect = canvas.getBoundingClientRect();
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
ctx.scale(dpr, dpr);
const bufferLength = analyserNode.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
// Initialize spectrogram data
spectrogramDataRef.current = ctx.createImageData(rect.width, rect.height);
const draw = () => {
animationFrameRef.current = requestAnimationFrame(draw);
analyserNode.getByteFrequencyData(dataArray);
if (!spectrogramDataRef.current) return;
const imageData = spectrogramDataRef.current;
// Shift existing data to the left by 1 pixel
for (let y = 0; y < rect.height; y++) {
for (let x = 0; x < rect.width - 1; x++) {
const sourceIndex = ((y * rect.width) + x + 1) * 4;
const targetIndex = ((y * rect.width) + x) * 4;
imageData.data[targetIndex] = imageData.data[sourceIndex];
imageData.data[targetIndex + 1] = imageData.data[sourceIndex + 1];
imageData.data[targetIndex + 2] = imageData.data[sourceIndex + 2];
imageData.data[targetIndex + 3] = imageData.data[sourceIndex + 3];
}
}
// Add new column on the right
const x = rect.width - 1;
for (let y = 0; y < rect.height; y++) {
// Map frequency bins to canvas height (inverted), clamping so y = 0 cannot index past the last bin
const freqIndex = Math.min(bufferLength - 1, Math.floor((1 - y / rect.height) * bufferLength));
const value = dataArray[freqIndex];
// Color mapping with transparency: transparent (0) -> blue -> cyan -> green -> yellow -> red (255)
let r, g, b, a;
if (value < 64) {
// Transparent to blue
const t = value / 64;
r = 0;
g = 0;
b = Math.round(255 * t);
a = Math.round(255 * t);
} else if (value < 128) {
// Blue to cyan
r = 0;
g = (value - 64) * 4;
b = 255;
a = 255;
} else if (value < 192) {
// Cyan to green
r = 0;
g = 255;
b = 255 - (value - 128) * 4;
a = 255;
} else {
// Green to yellow to red
r = (value - 192) * 4;
g = 255;
b = 0;
a = 255;
}
const index = ((y * rect.width) + x) * 4;
imageData.data[index] = r;
imageData.data[index + 1] = g;
imageData.data[index + 2] = b;
imageData.data[index + 3] = a;
}
// Draw the spectrogram
ctx.putImageData(imageData, 0, 0);
// Draw frequency labels
ctx.fillStyle = 'rgba(255, 255, 255, 0.8)';
ctx.font = '10px monospace';
ctx.textAlign = 'left';
ctx.fillText('20kHz', 5, 12);
ctx.fillText('1kHz', 5, rect.height / 2);
ctx.fillText('20Hz', 5, rect.height - 5);
};
draw();
return () => {
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
};
}, [analyserNode]);
return (
<div className={cn('w-full h-full bg-card/50 border-2 border-accent/50 rounded-lg p-2', className)}>
<div className="text-[10px] font-bold text-accent uppercase tracking-wider mb-2">
Spectrogram
</div>
<div className="w-full h-[calc(100%-24px)] rounded bg-muted/30">
<canvas
ref={canvasRef}
className="w-full h-full rounded"
/>
</div>
</div>
);
}
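The per-pixel colour ramp above is a fixed piecewise mapping from byte magnitude to RGBA. Pulled out on its own (the function name is illustrative, not part of the component):

```typescript
// Piecewise colour map used by the spectrogram: transparent -> blue ->
// cyan -> green -> yellow/red as the byte magnitude (0-255) rises.
function magnitudeToRgba(value: number): [number, number, number, number] {
  if (value < 64) {
    const t = value / 64; // fade in from transparent to blue
    return [0, 0, Math.round(255 * t), Math.round(255 * t)];
  }
  if (value < 128) return [0, (value - 64) * 4, 255, 255];      // blue -> cyan
  if (value < 192) return [0, 255, 255 - (value - 128) * 4, 255]; // cyan -> green
  return [(value - 192) * 4, 255, 0, 255];                       // green -> yellow/red
}
```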


@@ -0,0 +1,197 @@
'use client';
import * as React from 'react';
import { Eye, EyeOff, ChevronDown, ChevronUp, Copy, Clipboard } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import type { AutomationMode } from '@/types/automation';
export interface AutomationHeaderProps {
parameterName: string;
currentValue?: number;
visible: boolean;
mode: AutomationMode;
color?: string;
onToggleVisible?: () => void;
onModeChange?: (mode: AutomationMode) => void;
onHeightChange?: (delta: number) => void;
className?: string;
formatter?: (value: number) => string;
// Parameter selection
availableParameters?: Array<{ id: string; name: string }>;
selectedParameterId?: string;
onParameterChange?: (parameterId: string) => void;
// Copy/Paste automation
onCopyAutomation?: () => void;
onPasteAutomation?: () => void;
}
const MODE_LABELS: Record<AutomationMode, string> = {
read: 'R',
write: 'W',
touch: 'T',
latch: 'L',
};
const MODE_COLORS: Record<AutomationMode, string> = {
read: 'text-muted-foreground',
write: 'text-red-500',
touch: 'text-yellow-500',
latch: 'text-orange-500',
};
export function AutomationHeader({
parameterName,
currentValue,
visible,
mode,
color,
onToggleVisible,
onModeChange,
onHeightChange,
className,
formatter,
availableParameters,
selectedParameterId,
onParameterChange,
onCopyAutomation,
onPasteAutomation,
}: AutomationHeaderProps) {
const modes: AutomationMode[] = ['read', 'write', 'touch', 'latch'];
const currentModeIndex = modes.indexOf(mode);
const handleCycleModeClick = () => {
if (!onModeChange) return;
const nextIndex = (currentModeIndex + 1) % modes.length;
onModeChange(modes[nextIndex]);
};
const formatValue = (value: number) => {
if (formatter) return formatter(value);
return value.toFixed(2);
};
return (
<div
className={cn(
'flex items-center gap-2 px-3 py-1.5 bg-muted border-t border-b border-border/30 flex-shrink-0',
className
)}
>
{/* Automation label - always visible */}
<span className="text-xs font-medium flex-shrink-0">Automation</span>
{/* Color indicator */}
{color && (
<div
className="w-1 h-4 rounded-full flex-shrink-0"
style={{ backgroundColor: color }}
/>
)}
{/* Parameter name / selector */}
{availableParameters && availableParameters.length > 1 ? (
<select
value={selectedParameterId}
onChange={(e) => onParameterChange?.(e.target.value)}
className="text-xs font-medium text-foreground w-auto min-w-[120px] max-w-[200px] bg-background/50 border border-border/30 rounded px-1.5 py-0.5 hover:bg-background/80 focus:outline-none focus:ring-1 focus:ring-primary"
>
{availableParameters.map((param) => (
<option key={param.id} value={param.id}>
{param.name}
</option>
))}
</select>
) : (
<span className="text-xs font-medium text-foreground truncate">
{parameterName}
</span>
)}
{/* Current value display */}
{currentValue !== undefined && (
<span className="text-[10px] font-mono text-muted-foreground px-1.5 py-0.5 bg-background/50 rounded">
{formatValue(currentValue)}
</span>
)}
{/* Automation mode button */}
<Button
variant="ghost"
size="icon-sm"
onClick={handleCycleModeClick}
title={`Automation mode: ${mode} (click to cycle)`}
className={cn('h-5 w-5 text-[10px] font-bold flex-shrink-0', MODE_COLORS[mode])}
>
{MODE_LABELS[mode]}
</Button>
{/* Height controls */}
{onHeightChange && (
<div className="flex flex-col gap-0 flex-shrink-0">
<Button
variant="ghost"
size="icon-sm"
onClick={() => onHeightChange(20)}
title="Increase lane height"
className="h-3 w-4 p-0"
>
<ChevronUp className="h-2.5 w-2.5" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={() => onHeightChange(-20)}
title="Decrease lane height"
className="h-3 w-4 p-0"
>
<ChevronDown className="h-2.5 w-2.5" />
</Button>
</div>
)}
{/* Copy/Paste automation controls */}
{(onCopyAutomation || onPasteAutomation) && (
<div className="flex gap-1 flex-shrink-0 ml-auto mr-8">
{onCopyAutomation && (
<Button
variant="ghost"
size="icon-sm"
onClick={onCopyAutomation}
title="Copy automation data"
className="h-5 w-5"
>
<Copy className="h-3 w-3" />
</Button>
)}
{onPasteAutomation && (
<Button
variant="ghost"
size="icon-sm"
onClick={onPasteAutomation}
title="Paste automation data"
className="h-5 w-5"
>
<Clipboard className="h-3 w-3" />
</Button>
)}
</div>
)}
{/* Show/hide toggle - Positioned absolutely on the right */}
<Button
variant="ghost"
size="icon-sm"
onClick={onToggleVisible}
title={visible ? 'Hide automation' : 'Show automation'}
className="absolute right-2 h-5 w-5 flex-shrink-0"
>
{visible ? (
<Eye className="h-3 w-3" />
) : (
<EyeOff className="h-3 w-3 text-muted-foreground" />
)}
</Button>
</div>
);
}


@@ -0,0 +1,317 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
import type { AutomationLane as AutomationLaneType, AutomationPoint as AutomationPointType } from '@/types/automation';
import { AutomationPoint } from './AutomationPoint';
export interface AutomationLaneProps {
lane: AutomationLaneType;
duration: number; // Total timeline duration in seconds
zoom: number; // Zoom factor
currentTime?: number; // Playhead position
onUpdateLane?: (updates: Partial<AutomationLaneType>) => void;
onAddPoint?: (time: number, value: number) => void;
onUpdatePoint?: (pointId: string, updates: Partial<AutomationPointType>) => void;
onRemovePoint?: (pointId: string) => void;
className?: string;
}
export function AutomationLane({
lane,
duration,
zoom,
currentTime = 0,
onUpdateLane,
onAddPoint,
onUpdatePoint,
onRemovePoint,
className,
}: AutomationLaneProps) {
const canvasRef = React.useRef<HTMLCanvasElement>(null);
const containerRef = React.useRef<HTMLDivElement>(null);
const [selectedPointId, setSelectedPointId] = React.useState<string | null>(null);
const [isDraggingPoint, setIsDraggingPoint] = React.useState(false);
// Convert time to X pixel position
const timeToX = React.useCallback(
(time: number): number => {
if (!containerRef.current) return 0;
const width = containerRef.current.clientWidth;
return (time / duration) * width;
},
[duration]
);
// Convert value (0-1) to Y pixel position (inverted: 0 at bottom, 1 at top)
const valueToY = React.useCallback(
(value: number): number => {
if (!containerRef.current) return 0;
const height = lane.height;
return height * (1 - value);
},
[lane.height]
);
// Convert X pixel position to time
const xToTime = React.useCallback(
(x: number): number => {
if (!containerRef.current) return 0;
const width = containerRef.current.clientWidth;
return (x / width) * duration;
},
[duration]
);
// Convert Y pixel position to value (0-1)
const yToValue = React.useCallback(
(y: number): number => {
const height = lane.height;
return Math.max(0, Math.min(1, 1 - y / height));
},
[lane.height]
);
// Draw automation curve
React.useEffect(() => {
if (!canvasRef.current || !lane.visible) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
const dpr = window.devicePixelRatio || 1;
const rect = canvas.getBoundingClientRect();
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
ctx.scale(dpr, dpr);
const width = rect.width;
const height = rect.height;
// Clear canvas
ctx.clearRect(0, 0, width, height);
// Background
ctx.fillStyle = getComputedStyle(canvas).getPropertyValue('--color-background') || 'rgb(15, 23, 42)';
ctx.fillRect(0, 0, width, height);
// Grid lines (horizontal value guides)
ctx.strokeStyle = 'rgba(148, 163, 184, 0.1)';
ctx.lineWidth = 1;
for (let i = 0; i <= 4; i++) {
const y = (height / 4) * i;
ctx.beginPath();
ctx.moveTo(0, y);
ctx.lineTo(width, y);
ctx.stroke();
}
// Draw automation curve
if (lane.points.length > 0) {
const color = lane.color || 'rgb(59, 130, 246)';
ctx.strokeStyle = color;
ctx.lineWidth = 2;
ctx.beginPath();
// Sort points by time
const sortedPoints = [...lane.points].sort((a, b) => a.time - b.time);
// Draw lines between points
for (let i = 0; i < sortedPoints.length; i++) {
const point = sortedPoints[i];
const x = timeToX(point.time);
const y = valueToY(point.value);
if (i === 0) {
// Start from left edge at first point's value
ctx.moveTo(0, y);
ctx.lineTo(x, y);
} else {
const prevPoint = sortedPoints[i - 1];
const prevX = timeToX(prevPoint.time);
const prevY = valueToY(prevPoint.value);
if (point.curve === 'step') {
// Step curve: horizontal then vertical
ctx.lineTo(x, prevY);
ctx.lineTo(x, y);
} else {
// Linear curve (bezier not implemented yet)
ctx.lineTo(x, y);
}
}
// Extend to right edge from last point
if (i === sortedPoints.length - 1) {
ctx.lineTo(width, y);
}
}
ctx.stroke();
// Fill area under curve
ctx.globalAlpha = 0.2;
ctx.fillStyle = color;
ctx.lineTo(width, height);
ctx.lineTo(0, height);
ctx.closePath();
ctx.fill();
ctx.globalAlpha = 1.0;
}
// Draw playhead
if (currentTime >= 0 && duration > 0) {
const playheadX = timeToX(currentTime);
if (playheadX >= 0 && playheadX <= width) {
ctx.strokeStyle = 'rgba(239, 68, 68, 0.8)';
ctx.lineWidth = 2;
ctx.beginPath();
ctx.moveTo(playheadX, 0);
ctx.lineTo(playheadX, height);
ctx.stroke();
}
}
}, [lane, duration, zoom, currentTime, timeToX, valueToY]);
// Handle canvas click to add point
const handleCanvasClick = React.useCallback(
(e: React.MouseEvent<HTMLCanvasElement>) => {
if (isDraggingPoint || !onAddPoint) return;
const rect = e.currentTarget.getBoundingClientRect();
const x = e.clientX - rect.left;
const y = e.clientY - rect.top;
const time = xToTime(x);
const value = yToValue(y);
onAddPoint(time, value);
},
[isDraggingPoint, onAddPoint, xToTime, yToValue]
);
// Handle point drag
const handlePointDragStart = React.useCallback((pointId: string) => {
setIsDraggingPoint(true);
setSelectedPointId(pointId);
}, []);
const handlePointDrag = React.useCallback(
(pointId: string, deltaX: number, deltaY: number) => {
if (!containerRef.current || !onUpdatePoint) return;
const point = lane.points.find((p) => p.id === pointId);
if (!point) return;
const rect = containerRef.current.getBoundingClientRect();
const width = rect.width;
// Calculate new time and value
const timePerPixel = duration / width;
const valuePerPixel = 1 / lane.height;
const newTime = Math.max(0, Math.min(duration, point.time + deltaX * timePerPixel));
const newValue = Math.max(0, Math.min(1, point.value - deltaY * valuePerPixel));
onUpdatePoint(pointId, { time: newTime, value: newValue });
},
[lane.points, lane.height, duration, onUpdatePoint]
);
const handlePointDragEnd = React.useCallback(() => {
setIsDraggingPoint(false);
}, []);
// Handle point click (select)
const handlePointClick = React.useCallback((pointId: string, event: React.MouseEvent) => {
event.stopPropagation();
setSelectedPointId(pointId);
}, []);
// Handle point double-click (delete)
const handlePointDoubleClick = React.useCallback(
(pointId: string) => {
if (onRemovePoint) {
onRemovePoint(pointId);
}
},
[onRemovePoint]
);
// Handle keyboard delete
React.useEffect(() => {
const handleKeyDown = (e: KeyboardEvent) => {
if ((e.key === 'Delete' || e.key === 'Backspace') && selectedPointId && onRemovePoint) {
e.preventDefault();
onRemovePoint(selectedPointId);
setSelectedPointId(null);
}
};
window.addEventListener('keydown', handleKeyDown);
return () => window.removeEventListener('keydown', handleKeyDown);
}, [selectedPointId, onRemovePoint]);
// Get current value at playhead (interpolated)
const getCurrentValue = React.useCallback((): number | undefined => {
if (lane.points.length === 0) return undefined;
const sortedPoints = [...lane.points].sort((a, b) => a.time - b.time);
// Find surrounding points
let prevPoint = sortedPoints[0];
let nextPoint = sortedPoints[sortedPoints.length - 1];
for (let i = 0; i < sortedPoints.length - 1; i++) {
if (sortedPoints[i].time <= currentTime && sortedPoints[i + 1].time >= currentTime) {
prevPoint = sortedPoints[i];
nextPoint = sortedPoints[i + 1];
break;
}
}
// Interpolate
if (currentTime <= prevPoint.time) return prevPoint.value;
if (currentTime >= nextPoint.time) return nextPoint.value;
const timeDelta = nextPoint.time - prevPoint.time;
const valueDelta = nextPoint.value - prevPoint.value;
const progress = (currentTime - prevPoint.time) / timeDelta;
return prevPoint.value + valueDelta * progress;
}, [lane.points, currentTime]);
if (!lane.visible) return null;
return (
<div
ref={containerRef}
className={cn('relative bg-background/30 overflow-hidden cursor-crosshair', className)}
style={{ height: lane.height }}
>
<canvas
ref={canvasRef}
className="absolute inset-0 w-full h-full"
onClick={handleCanvasClick}
/>
{/* Automation points */}
{lane.points.map((point) => (
<AutomationPoint
key={point.id}
point={point}
x={timeToX(point.time)}
y={valueToY(point.value)}
isSelected={selectedPointId === point.id}
onDragStart={handlePointDragStart}
onDrag={handlePointDrag}
onDragEnd={handlePointDragEnd}
onClick={handlePointClick}
onDoubleClick={handlePointDoubleClick}
/>
))}
</div>
);
}
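getCurrentValue() above evaluates the lane at the playhead by clamping outside the first and last points and linearly interpolating between neighbours. The same logic as a standalone helper (the Point shape here is illustrative):

```typescript
interface Point { time: number; value: number }

// Linear interpolation of an automation value at time t, matching the
// lane component: clamp before the first point and after the last one.
function valueAt(points: Point[], t: number): number | undefined {
  if (points.length === 0) return undefined;
  const sorted = [...points].sort((a, b) => a.time - b.time);
  if (t <= sorted[0].time) return sorted[0].value;
  const last = sorted[sorted.length - 1];
  if (t >= last.time) return last.value;
  for (let i = 0; i < sorted.length - 1; i++) {
    const a = sorted[i], b = sorted[i + 1];
    if (a.time <= t && t <= b.time) {
      const progress = (t - a.time) / (b.time - a.time);
      return a.value + (b.value - a.value) * progress;
    }
  }
  return last.value; // unreachable with sorted input, kept for safety
}
```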


@@ -0,0 +1,123 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
import type { AutomationPoint as AutomationPointType } from '@/types/automation';
export interface AutomationPointProps {
point: AutomationPointType;
x: number; // Pixel position
y: number; // Pixel position
isSelected?: boolean;
onDragStart?: (pointId: string, startX: number, startY: number) => void;
onDrag?: (pointId: string, deltaX: number, deltaY: number) => void;
onDragEnd?: (pointId: string) => void;
onClick?: (pointId: string, event: React.MouseEvent) => void;
onDoubleClick?: (pointId: string) => void;
}
export function AutomationPoint({
point,
x,
y,
isSelected = false,
onDragStart,
onDrag,
onDragEnd,
onClick,
onDoubleClick,
}: AutomationPointProps) {
const [isDragging, setIsDragging] = React.useState(false);
const dragStartRef = React.useRef({ x: 0, y: 0 });
const handleMouseDown = React.useCallback(
(e: React.MouseEvent) => {
if (e.button !== 0) return; // Only left click
e.stopPropagation();
setIsDragging(true);
dragStartRef.current = { x: e.clientX, y: e.clientY };
if (onDragStart) {
onDragStart(point.id, e.clientX, e.clientY);
}
},
[point.id, onDragStart]
);
const handleClick = React.useCallback(
(e: React.MouseEvent) => {
if (!isDragging && onClick) {
onClick(point.id, e);
}
},
[isDragging, point.id, onClick]
);
const handleDoubleClick = React.useCallback(
(e: React.MouseEvent) => {
e.stopPropagation();
if (onDoubleClick) {
onDoubleClick(point.id);
}
},
[point.id, onDoubleClick]
);
// Global mouse handlers
React.useEffect(() => {
if (!isDragging) return;
const handleMouseMove = (e: MouseEvent) => {
if (!isDragging) return;
const deltaX = e.clientX - dragStartRef.current.x;
const deltaY = e.clientY - dragStartRef.current.y;
if (onDrag) {
onDrag(point.id, deltaX, deltaY);
}
// Update drag start position for next delta calculation
dragStartRef.current = { x: e.clientX, y: e.clientY };
};
const handleMouseUp = () => {
if (isDragging) {
setIsDragging(false);
if (onDragEnd) {
onDragEnd(point.id);
}
}
};
window.addEventListener('mousemove', handleMouseMove);
window.addEventListener('mouseup', handleMouseUp);
return () => {
window.removeEventListener('mousemove', handleMouseMove);
window.removeEventListener('mouseup', handleMouseUp);
};
}, [isDragging, point.id, onDrag, onDragEnd]);
return (
<div
className={cn(
'absolute rounded-full cursor-pointer transition-all select-none',
'hover:scale-125',
isDragging ? 'scale-125 z-10' : 'z-0',
isSelected
? 'w-3 h-3 bg-primary border-2 border-background shadow-lg'
: 'w-2.5 h-2.5 bg-primary/80 border border-background shadow-md'
)}
style={{
left: x - (isSelected || isDragging ? 6 : 5),
top: y - (isSelected || isDragging ? 6 : 5),
}}
onMouseDown={handleMouseDown}
onClick={handleClick}
onDoubleClick={handleDoubleClick}
title={`Time: ${point.time.toFixed(3)}s, Value: ${point.value.toFixed(3)}`}
/>
);
}
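The handler above resets `dragStartRef.current` after every mousemove, so `onDrag` receives the movement since the previous event rather than since the original mousedown — consumers can simply add each delta to the point's current position. A minimal standalone sketch of that pattern (names here are illustrative, not part of the component's API):

```typescript
// Incremental drag deltas: the reference point is reset after each move,
// mirroring how the component updates dragStartRef.current above.
type Point = { x: number; y: number };

function makeIncrementalDrag(start: Point) {
  let last: Point = { ...start };
  return (e: Point): Point => {
    const delta = { x: e.x - last.x, y: e.y - last.y };
    last = { x: e.x, y: e.y }; // reset reference for the next event
    return delta;
  };
}
```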


@@ -0,0 +1,150 @@
'use client';
import * as React from 'react';
import { ChevronDown, ChevronUp } from 'lucide-react';
import { CircularKnob } from '@/components/ui/CircularKnob';
import { MasterFader } from './MasterFader';
import { cn } from '@/lib/utils/cn';
export interface MasterControlsProps {
volume: number;
pan: number;
peakLevel: number;
rmsLevel: number;
isClipping: boolean;
isMuted?: boolean;
collapsed?: boolean; // For collapsible on mobile/small screens
onVolumeChange: (volume: number) => void;
onPanChange: (pan: number) => void;
onMuteToggle: () => void;
onResetClip?: () => void;
onToggleCollapse?: () => void;
className?: string;
}
export function MasterControls({
volume,
pan,
peakLevel,
rmsLevel,
isClipping,
isMuted = false,
collapsed = false,
onVolumeChange,
onPanChange,
onMuteToggle,
onResetClip,
onToggleCollapse,
className,
}: MasterControlsProps) {
// Collapsed view - minimal controls
if (collapsed) {
return (
<div className={cn(
'flex flex-col items-center gap-2 px-3 py-2 bg-card/50 border border-accent/50 rounded-lg w-full',
className
)}>
<div className="flex items-center justify-between w-full">
<div className="text-xs font-bold text-accent uppercase tracking-wider">
Master
</div>
{onToggleCollapse && (
<button
onClick={onToggleCollapse}
className="p-1 hover:bg-accent/20 rounded transition-colors"
title="Expand master controls"
>
<ChevronDown className="h-3 w-3 text-muted-foreground" />
</button>
)}
</div>
<div className="flex items-center gap-2 w-full justify-center">
<button
onClick={onMuteToggle}
className={cn(
'h-7 w-7 rounded-md flex items-center justify-center transition-all text-xs font-bold',
isMuted
? 'bg-blue-500 text-white shadow-md shadow-blue-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title={isMuted ? 'Unmute' : 'Mute'}
>
M
</button>
<div className="flex-1 h-2 bg-muted rounded-full overflow-hidden">
<div
className={cn(
'h-full transition-all',
peakLevel > 0.95 ? 'bg-red-500' : peakLevel > 0.8 ? 'bg-yellow-500' : 'bg-green-500'
)}
style={{ width: `${peakLevel * 100}%` }}
/>
</div>
</div>
</div>
);
}
return (
<div className={cn(
'flex flex-col items-center gap-3 px-4 py-3 bg-card/50 border-2 border-accent/50 rounded-lg',
className
)}>
{/* Master Label with collapse button */}
<div className="flex items-center justify-between w-full">
<div className="text-[10px] font-bold text-accent uppercase tracking-wider flex-1 text-center">
Master
</div>
{onToggleCollapse && (
<button
onClick={onToggleCollapse}
className="p-0.5 hover:bg-accent/20 rounded transition-colors flex-shrink-0 lg:hidden"
title="Collapse master controls"
>
<ChevronUp className="h-3 w-3 text-muted-foreground" />
</button>
)}
</div>
{/* Pan Control */}
<CircularKnob
value={pan}
onChange={onPanChange}
min={-1}
max={1}
step={0.01}
label="PAN"
size={48}
formatValue={(value: number) => {
if (Math.abs(value) < 0.01) return 'C';
if (value < 0) return `${Math.abs(value * 100).toFixed(0)}L`;
return `${(value * 100).toFixed(0)}R`;
}}
/>
{/* Master Fader with Integrated Meters */}
<MasterFader
value={volume}
peakLevel={peakLevel}
rmsLevel={rmsLevel}
isClipping={isClipping}
onChange={onVolumeChange}
onResetClip={onResetClip}
/>
{/* Mute Button */}
<button
onClick={onMuteToggle}
className={cn(
'h-8 w-8 rounded-md flex items-center justify-center transition-all text-[11px] font-bold',
isMuted
? 'bg-blue-500 text-white shadow-md shadow-blue-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title={isMuted ? 'Unmute' : 'Mute'}
>
M
</button>
</div>
);
}



@@ -0,0 +1,261 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface MasterFaderProps {
value: number;
peakLevel: number;
rmsLevel: number;
isClipping: boolean;
onChange: (value: number) => void;
onResetClip?: () => void;
onTouchStart?: () => void;
onTouchEnd?: () => void;
className?: string;
}
export function MasterFader({
value,
peakLevel,
rmsLevel,
isClipping,
onChange,
onResetClip,
onTouchStart,
onTouchEnd,
className,
}: MasterFaderProps) {
const [isDragging, setIsDragging] = React.useState(false);
const containerRef = React.useRef<HTMLDivElement>(null);
// Convert linear 0-1 to dB scale for display
const linearToDb = (linear: number): number => {
if (linear === 0) return -60;
const db = 20 * Math.log10(linear);
return Math.max(-60, Math.min(0, db));
};
const valueDb = linearToDb(value);
const peakDb = linearToDb(peakLevel);
const rmsDb = linearToDb(rmsLevel);
// Calculate bar widths (0-100%)
const peakWidth = ((peakDb + 60) / 60) * 100;
const rmsWidth = ((rmsDb + 60) / 60) * 100;
const handleMouseDown = (e: React.MouseEvent) => {
e.preventDefault();
setIsDragging(true);
onTouchStart?.();
updateValue(e.clientY);
};
const handleMouseMove = React.useCallback(
(e: MouseEvent) => {
if (!isDragging) return;
updateValue(e.clientY);
},
[isDragging]
);
const handleMouseUp = React.useCallback(() => {
setIsDragging(false);
onTouchEnd?.();
}, [onTouchEnd]);
const handleTouchStart = (e: React.TouchEvent) => {
e.preventDefault();
const touch = e.touches[0];
setIsDragging(true);
onTouchStart?.();
updateValue(touch.clientY);
};
const handleTouchMove = React.useCallback(
(e: TouchEvent) => {
if (!isDragging || e.touches.length === 0) return;
const touch = e.touches[0];
updateValue(touch.clientY);
},
[isDragging]
);
const handleTouchEnd = React.useCallback(() => {
setIsDragging(false);
onTouchEnd?.();
}, [onTouchEnd]);
const updateValue = (clientY: number) => {
if (!containerRef.current) return;
const rect = containerRef.current.getBoundingClientRect();
const y = clientY - rect.top;
// Track has 32px (2rem) padding on top and bottom (top-8 bottom-8)
const trackPadding = 32;
const trackHeight = rect.height - (trackPadding * 2);
// Clamp y to track bounds
const clampedY = Math.max(trackPadding, Math.min(rect.height - trackPadding, y));
// Inverted: top = max (1), bottom = min (0)
// Map clampedY from [trackPadding, height-trackPadding] to [1, 0]
const percentage = 1 - ((clampedY - trackPadding) / trackHeight);
onChange(Math.max(0, Math.min(1, percentage)));
};
React.useEffect(() => {
if (isDragging) {
window.addEventListener('mousemove', handleMouseMove);
window.addEventListener('mouseup', handleMouseUp);
window.addEventListener('touchmove', handleTouchMove);
window.addEventListener('touchend', handleTouchEnd);
return () => {
window.removeEventListener('mousemove', handleMouseMove);
window.removeEventListener('mouseup', handleMouseUp);
window.removeEventListener('touchmove', handleTouchMove);
window.removeEventListener('touchend', handleTouchEnd);
};
}
}, [isDragging, handleMouseMove, handleMouseUp, handleTouchMove, handleTouchEnd]);
return (
<div className={cn('flex gap-3', className)} style={{ marginLeft: '16px' }}>
{/* dB Labels (Left) */}
<div className="flex flex-col justify-between text-[10px] font-mono text-muted-foreground py-1">
<span>0</span>
<span>-12</span>
<span>-24</span>
<span>-60</span>
</div>
{/* Fader Container */}
<div
ref={containerRef}
className="relative w-12 h-40 bg-background/50 rounded-md border border-border/50 cursor-pointer"
onMouseDown={handleMouseDown}
onTouchStart={handleTouchStart}
>
{/* Peak Meter (Horizontal Bar - Top) */}
<div className="absolute inset-x-2 top-2 h-3 bg-background/80 rounded-sm overflow-hidden border border-border/30">
<div
className="absolute left-0 top-0 bottom-0 transition-all duration-75 ease-out"
style={{ width: `${Math.max(0, Math.min(100, peakWidth))}%` }}
>
<div className={cn(
'w-full h-full',
peakDb > -3 ? 'bg-red-500' :
peakDb > -6 ? 'bg-yellow-500' :
'bg-green-500'
)} />
</div>
</div>
{/* RMS Meter (Horizontal Bar - Bottom) */}
<div className="absolute inset-x-2 bottom-2 h-3 bg-background/80 rounded-sm overflow-hidden border border-border/30">
<div
className="absolute left-0 top-0 bottom-0 transition-all duration-150 ease-out"
style={{ width: `${Math.max(0, Math.min(100, rmsWidth))}%` }}
>
<div className={cn(
'w-full h-full',
rmsDb > -3 ? 'bg-red-500' :
rmsDb > -6 ? 'bg-yellow-500' :
'bg-green-500'
)} />
</div>
</div>
{/* Fader Track */}
<div className="absolute top-8 bottom-8 left-1/2 -translate-x-1/2 w-1.5 bg-muted/50 rounded-full" />
{/* Fader Handle */}
<div
className="absolute left-1/2 -translate-x-1/2 w-10 h-4 bg-primary/80 border-2 border-primary rounded-md shadow-lg cursor-grab active:cursor-grabbing pointer-events-none transition-all"
style={{
// Inverted: value 1 = top of track (20%), value 0 = bottom of track (80%)
// Track has top-8 bottom-8 padding (20% and 80% of h-40 container)
// Handle moves within 60% range (from 20% to 80%)
top: `calc(${20 + (1 - value) * 60}% - 0.5rem)`,
}}
>
{/* Handle grip lines */}
<div className="absolute inset-0 flex items-center justify-center gap-0.5">
<div className="h-2 w-px bg-primary-foreground/30" />
<div className="h-2 w-px bg-primary-foreground/30" />
<div className="h-2 w-px bg-primary-foreground/30" />
</div>
</div>
{/* Clip Indicator */}
{isClipping && (
<button
onClick={onResetClip}
className="absolute top-0 left-0 right-0 px-1 py-0.5 text-[9px] font-bold text-white bg-red-500 border-b border-red-600 rounded-t-md shadow-lg shadow-red-500/50 animate-pulse"
title="Click to reset clip indicator"
>
CLIP
</button>
)}
{/* dB Scale Markers */}
<div className="absolute inset-0 px-2 py-8 pointer-events-none">
<div className="relative h-full">
{/* Markers placed where the handle sits at these gains on the linear-amplitude track */}
{/* -12 dB (10^(-12/20) ≈ 0.25 → 75% from top) */}
<div className="absolute left-0 right-0 h-px bg-border/20" style={{ top: '75%' }} />
{/* -6 dB (≈ 0.50 → 50% from top) */}
<div className="absolute left-0 right-0 h-px bg-yellow-500/20" style={{ top: '50%' }} />
{/* -3 dB (≈ 0.71 → 29% from top) */}
<div className="absolute left-0 right-0 h-px bg-red-500/30" style={{ top: '29%' }} />
</div>
</div>
</div>
{/* Value and Level Display (Right) */}
<div className="flex flex-col justify-between items-start text-[9px] font-mono py-1 w-[36px]">
{/* Current dB Value */}
<div className={cn(
'font-bold text-[11px]',
valueDb > -3 ? 'text-red-500' :
valueDb > -6 ? 'text-yellow-500' :
'text-green-500'
)}>
{valueDb > -60 ? `${valueDb.toFixed(1)}` : '-∞'}
</div>
{/* Spacer */}
<div className="flex-1" />
{/* Peak Level */}
<div className="flex flex-col items-start">
<span className="text-muted-foreground/60">PK</span>
<span className={cn(
'font-mono text-[10px]',
peakDb > -3 ? 'text-red-500' :
peakDb > -6 ? 'text-yellow-500' :
'text-green-500'
)}>
{peakDb > -60 ? `${peakDb.toFixed(1)}` : '-∞'}
</span>
</div>
{/* RMS Level */}
<div className="flex flex-col items-start">
<span className="text-muted-foreground/60">RM</span>
<span className={cn(
'font-mono text-[10px]',
rmsDb > -3 ? 'text-red-500' :
rmsDb > -6 ? 'text-yellow-500' :
'text-green-500'
)}>
{rmsDb > -60 ? `${rmsDb.toFixed(1)}` : '-∞'}
</span>
</div>
{/* dB Label */}
<span className="text-muted-foreground/60 text-[8px]">dB</span>
</div>
</div>
);
}
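MasterFader's level math converts linear amplitude (0–1) to dB with a -60 dB floor, then maps that onto a 0–100% meter size. Extracted as a standalone sketch (the helper names are assumptions for illustration, not exports of the component):

```typescript
// Linear amplitude -> dBFS, clamped to the -60 dB floor used by the fader.
function linearToDb(linear: number): number {
  if (linear === 0) return -60; // treat silence as the scale floor
  return Math.max(-60, Math.min(0, 20 * Math.log10(linear)));
}

// Map the dB value onto a 0-100% bar, matching peakWidth/rmsWidth above.
function meterWidthPercent(linear: number): number {
  return ((linearToDb(linear) + 60) / 60) * 100;
}
```

With this mapping, full scale (1.0) fills the bar, silence empties it, and half amplitude (-6 dB) lands at 90%.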


@@ -0,0 +1,120 @@
'use client';
import * as React from 'react';
export interface MasterMeterProps {
/** Peak level (0-1) */
peakLevel: number;
/** RMS level (0-1) */
rmsLevel: number;
/** Whether clipping has occurred */
isClipping: boolean;
/** Callback to reset clip indicator */
onResetClip?: () => void;
}
export function MasterMeter({
peakLevel,
rmsLevel,
isClipping,
onResetClip,
}: MasterMeterProps) {
// Convert linear 0-1 to dB scale for display
const linearToDb = (linear: number): number => {
if (linear === 0) return -60;
const db = 20 * Math.log10(linear);
return Math.max(-60, Math.min(0, db));
};
const peakDb = linearToDb(peakLevel);
const rmsDb = linearToDb(rmsLevel);
// Calculate bar heights (0-100%)
const peakHeight = ((peakDb + 60) / 60) * 100;
const rmsHeight = ((rmsDb + 60) / 60) * 100;
return (
<div className="flex items-center gap-2 px-3 py-2 bg-muted/30 rounded-md border border-border/50">
{/* Clip Indicator */}
<button
onClick={onResetClip}
className={`w-6 h-6 rounded-sm border transition-colors ${
isClipping
? 'bg-red-500 border-red-600 shadow-lg shadow-red-500/50'
: 'bg-muted border-border/50'
}`}
title={isClipping ? 'Click to reset clip indicator' : 'No clipping'}
>
<span className="text-[10px] font-bold text-white">C</span>
</button>
{/* Meters */}
<div className="flex gap-1">
{/* Peak Meter (Left) */}
<div className="w-6 h-24 bg-background/50 rounded-sm relative overflow-hidden border border-border/50">
<div className="absolute bottom-0 left-0 right-0 transition-all duration-75 ease-out"
style={{ height: `${Math.max(0, Math.min(100, peakHeight))}%` }}>
<div className={`w-full h-full ${
peakDb > -3 ? 'bg-red-500' :
peakDb > -6 ? 'bg-yellow-500' :
'bg-green-500'
}`} />
</div>
{/* dB markers */}
<div className="absolute inset-0 pointer-events-none">
<div className="absolute top-0 left-0 right-0 h-px bg-red-500/30" title="0 dB" />
<div className="absolute top-[5%] left-0 right-0 h-px bg-yellow-500/20" title="-3 dB" />
<div className="absolute top-[10%] left-0 right-0 h-px bg-border/20" title="-6 dB" />
<div className="absolute top-[20%] left-0 right-0 h-px bg-border/20" title="-12 dB" />
<div className="absolute top-[30%] left-0 right-0 h-px bg-border/20" title="-18 dB" />
</div>
</div>
{/* RMS Meter (Right) */}
<div className="w-6 h-24 bg-background/50 rounded-sm relative overflow-hidden border border-border/50">
<div className="absolute bottom-0 left-0 right-0 transition-all duration-150 ease-out"
style={{ height: `${Math.max(0, Math.min(100, rmsHeight))}%` }}>
<div className={`w-full h-full ${
rmsDb > -3 ? 'bg-red-400' :
rmsDb > -6 ? 'bg-yellow-400' :
'bg-green-400'
}`} />
</div>
{/* dB markers */}
<div className="absolute inset-0 pointer-events-none">
<div className="absolute top-0 left-0 right-0 h-px bg-red-500/30" />
<div className="absolute top-[5%] left-0 right-0 h-px bg-yellow-500/20" />
<div className="absolute top-[10%] left-0 right-0 h-px bg-border/20" />
<div className="absolute top-[20%] left-0 right-0 h-px bg-border/20" />
<div className="absolute top-[30%] left-0 right-0 h-px bg-border/20" />
</div>
</div>
</div>
{/* Labels and Values */}
<div className="flex flex-col text-[10px] font-mono">
<div className="flex items-center gap-1">
<span className="text-muted-foreground w-6">PK:</span>
<span className={`w-12 text-right ${
peakDb > -3 ? 'text-red-500' :
peakDb > -6 ? 'text-yellow-500' :
'text-green-500'
}`}>
{peakDb > -60 ? `${peakDb.toFixed(1)}` : '-∞'}
</span>
</div>
<div className="flex items-center gap-1">
<span className="text-muted-foreground w-6">RM:</span>
<span className={`w-12 text-right ${
rmsDb > -3 ? 'text-red-400' :
rmsDb > -6 ? 'text-yellow-400' :
'text-green-400'
}`}>
{rmsDb > -60 ? `${rmsDb.toFixed(1)}` : '-∞'}
</span>
</div>
<div className="text-muted-foreground text-center mt-0.5">dB</div>
</div>
</div>
);
}


@@ -0,0 +1,130 @@
'use client';
import * as React from 'react';
import { AlertTriangle, XCircle, Info, X } from 'lucide-react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import { getBrowserInfo } from '@/lib/utils/browser-compat';
interface BrowserCompatDialogProps {
open: boolean;
missingFeatures: string[];
warnings: string[];
onClose: () => void;
}
export function BrowserCompatDialog({
open,
missingFeatures,
warnings,
onClose,
}: BrowserCompatDialogProps) {
const [browserInfo, setBrowserInfo] = React.useState({ name: 'Unknown', version: 'Unknown' });
const hasErrors = missingFeatures.length > 0;
// Get browser info only on client side
React.useEffect(() => {
setBrowserInfo(getBrowserInfo());
}, []);
if (!open) return null;
return (
<Modal open={open} onClose={onClose} title="">
<div className="p-6 max-w-md">
{/* Header */}
<div className="flex items-start justify-between mb-4">
<div className="flex items-center gap-2">
{hasErrors ? (
<>
<XCircle className="h-5 w-5 text-destructive" />
<h2 className="text-lg font-semibold">Browser Not Supported</h2>
</>
) : (
<>
<AlertTriangle className="h-5 w-5 text-yellow-500" />
<h2 className="text-lg font-semibold">Browser Warnings</h2>
</>
)}
</div>
<button onClick={onClose} className="text-muted-foreground hover:text-foreground">
<X className="h-4 w-4" />
</button>
</div>
<p className="text-sm text-muted-foreground mb-4">
{hasErrors ? (
<>Your browser is missing required features to run this audio editor.</>
) : (
<>Some features may not work as expected in your browser.</>
)}
</p>
<div className="space-y-4">
{/* Browser Info */}
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Info className="h-4 w-4" />
<span>
{browserInfo.name} {browserInfo.version}
</span>
</div>
{/* Missing Features */}
{missingFeatures.length > 0 && (
<div className="space-y-2">
<h3 className="text-sm font-semibold text-destructive flex items-center gap-2">
<XCircle className="h-4 w-4" />
Missing Required Features:
</h3>
<ul className="list-disc list-inside space-y-1 text-sm text-muted-foreground">
{missingFeatures.map((feature) => (
<li key={feature}>{feature}</li>
))}
</ul>
</div>
)}
{/* Warnings */}
{warnings.length > 0 && (
<div className="space-y-2">
<h3 className="text-sm font-semibold text-yellow-600 dark:text-yellow-500 flex items-center gap-2">
<AlertTriangle className="h-4 w-4" />
Warnings:
</h3>
<ul className="list-disc list-inside space-y-1 text-sm text-muted-foreground">
{warnings.map((warning) => (
<li key={warning}>{warning}</li>
))}
</ul>
</div>
)}
{/* Recommendations */}
{hasErrors && (
<div className="bg-muted/50 border border-border rounded-md p-3 space-y-2">
<h3 className="text-sm font-semibold">Recommended Browsers:</h3>
<ul className="text-sm text-muted-foreground space-y-1">
<li>• Chrome 90+ or Edge 90+</li>
<li>• Firefox 88+</li>
<li>• Safari 14+</li>
</ul>
</div>
)}
{/* Actions */}
<div className="flex justify-end gap-2">
{hasErrors ? (
<Button onClick={onClose} variant="destructive">
Close
</Button>
) : (
<Button onClick={onClose}>
Continue Anyway
</Button>
)}
</div>
</div>
</div>
</Modal>
);
}
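The `missingFeatures` list is produced by `@/lib/utils/browser-compat`, which is not part of this diff. A minimal sketch of what that kind of feature detection might look like (the feature list and `detectMissingFeatures` helper are assumptions, not the module's actual API):

```typescript
// Check a window-like object for required browser capabilities and
// return the human-readable names of any that are missing.
function detectMissingFeatures(w: Record<string, unknown>): string[] {
  const required: Array<[string, boolean]> = [
    ['Web Audio API', 'AudioContext' in w || 'webkitAudioContext' in w],
    ['File API', 'File' in w && 'FileReader' in w],
    ['Web Workers', 'Worker' in w],
  ];
  return required.filter(([, supported]) => !supported).map(([name]) => name);
}
```

The dialog then only needs to render whatever names come back, which keeps the detection logic testable outside the DOM.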


@@ -0,0 +1,210 @@
'use client';
import * as React from 'react';
import { X, Download } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
export interface ExportSettings {
format: 'wav' | 'mp3';
scope: 'project' | 'selection' | 'tracks'; // Export scope
bitDepth: 16 | 24 | 32;
bitrate: number; // For MP3: 128, 192, 256, 320 kbps
normalize: boolean;
filename: string;
}
export interface ExportDialogProps {
open: boolean;
onClose: () => void;
onExport: (settings: ExportSettings) => void;
isExporting?: boolean;
hasSelection?: boolean; // Whether any track has a selection
}
export function ExportDialog({ open, onClose, onExport, isExporting, hasSelection }: ExportDialogProps) {
const [settings, setSettings] = React.useState<ExportSettings>({
format: 'wav',
scope: 'project',
bitDepth: 16,
bitrate: 192, // Default MP3 bitrate
normalize: true,
filename: 'mix',
});
const handleExport = () => {
onExport(settings);
};
if (!open) return null;
return (
<div className="fixed inset-0 z-50 flex items-center justify-center bg-black/50">
<div className="bg-card border border-border rounded-lg shadow-xl w-full max-w-md p-6">
{/* Header */}
<div className="flex items-center justify-between mb-6">
<h2 className="text-lg font-semibold text-foreground">Export Audio</h2>
<button
onClick={onClose}
className="text-muted-foreground hover:text-foreground transition-colors"
disabled={isExporting}
>
<X className="h-5 w-5" />
</button>
</div>
{/* Settings */}
<div className="space-y-4">
{/* Filename */}
<div>
<label className="block text-sm font-medium text-foreground mb-2">
Filename
</label>
<input
type="text"
value={settings.filename}
onChange={(e) => setSettings({ ...settings, filename: e.target.value })}
className="w-full px-3 py-2 bg-background border border-border rounded text-foreground focus:outline-none focus:ring-2 focus:ring-primary"
disabled={isExporting}
/>
<p className="text-xs text-muted-foreground mt-1">
.{settings.format} will be added automatically
</p>
</div>
{/* Format */}
<div>
<label className="block text-sm font-medium text-foreground mb-2">
Format
</label>
<select
value={settings.format}
onChange={(e) => setSettings({ ...settings, format: e.target.value as 'wav' | 'mp3' })}
className="w-full px-3 py-2 bg-background border border-border rounded text-foreground focus:outline-none focus:ring-2 focus:ring-primary"
disabled={isExporting}
>
<option value="wav">WAV (Lossless, Uncompressed)</option>
<option value="mp3">MP3 (Lossy, Compressed)</option>
</select>
</div>
{/* Export Scope */}
<div>
<label className="block text-sm font-medium text-foreground mb-2">
Export Scope
</label>
<select
value={settings.scope}
onChange={(e) => setSettings({ ...settings, scope: e.target.value as 'project' | 'selection' | 'tracks' })}
className="w-full px-3 py-2 bg-background border border-border rounded text-foreground focus:outline-none focus:ring-2 focus:ring-primary"
disabled={isExporting}
>
<option value="project">Entire Project (Mix All Tracks)</option>
<option value="selection" disabled={!hasSelection}>
Selected Region {!hasSelection && '(No selection)'}
</option>
<option value="tracks">Individual Tracks (Separate Files)</option>
</select>
<p className="text-xs text-muted-foreground mt-1">
{settings.scope === 'project' && 'Mix all tracks into a single file'}
{settings.scope === 'selection' && 'Export only the selected region'}
{settings.scope === 'tracks' && 'Export each track as a separate file'}
</p>
</div>
{/* Bit Depth (WAV only) */}
{settings.format === 'wav' && (
<div>
<label className="block text-sm font-medium text-foreground mb-2">
Bit Depth
</label>
<div className="flex gap-2">
{[16, 24, 32].map((depth) => (
<button
key={depth}
onClick={() => setSettings({ ...settings, bitDepth: depth as 16 | 24 | 32 })}
className={cn(
'flex-1 px-3 py-2 rounded text-sm font-medium transition-colors',
settings.bitDepth === depth
? 'bg-primary text-primary-foreground'
: 'bg-background border border-border text-foreground hover:bg-accent'
)}
disabled={isExporting}
>
{depth}-bit {depth === 32 && '(Float)'}
</button>
))}
</div>
</div>
)}
{/* MP3 Bitrate */}
{settings.format === 'mp3' && (
<div>
<label className="block text-sm font-medium text-foreground mb-2">
Bitrate
</label>
<div className="flex gap-2">
{[128, 192, 256, 320].map((rate) => (
<button
key={rate}
onClick={() => setSettings({ ...settings, bitrate: rate })}
className={cn(
'flex-1 px-3 py-2 rounded text-sm font-medium transition-colors',
settings.bitrate === rate
? 'bg-primary text-primary-foreground'
: 'bg-background border border-border text-foreground hover:bg-accent'
)}
disabled={isExporting}
>
{rate} kbps
</button>
))}
</div>
</div>
)}
{/* Normalize */}
<div>
<label className="flex items-center gap-2 cursor-pointer">
<input
type="checkbox"
checked={settings.normalize}
onChange={(e) => setSettings({ ...settings, normalize: e.target.checked })}
className="w-4 h-4 rounded border-border text-primary focus:ring-primary"
disabled={isExporting}
/>
<span className="text-sm font-medium text-foreground">
Normalize audio
</span>
</label>
<p className="text-xs text-muted-foreground mt-1 ml-6">
Prevents clipping by adjusting peak levels
</p>
</div>
</div>
{/* Actions */}
<div className="flex gap-3 mt-6">
<Button
variant="outline"
onClick={onClose}
className="flex-1"
disabled={isExporting}
>
Cancel
</Button>
<Button
onClick={handleExport}
className="flex-1"
disabled={isExporting || !settings.filename.trim()}
>
<Download className="h-4 w-4 mr-2" />
{isExporting ? 'Exporting...' : 'Export'}
</Button>
</div>
</div>
</div>
);
}
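The dialog's "Normalize audio" checkbox promises to prevent clipping by adjusting peak levels. The exporter that consumes `ExportSettings` is not in this diff; a minimal, assumed sketch of peak normalization looks like this (`normalizePeak` is a hypothetical name):

```typescript
// Scale all samples so the absolute peak hits targetPeak (1.0 = 0 dBFS).
// Silence and already-normalized buffers are returned unchanged.
function normalizePeak(samples: Float32Array, targetPeak = 1.0): Float32Array {
  let peak = 0;
  for (const s of samples) peak = Math.max(peak, Math.abs(s));
  if (peak === 0 || peak === targetPeak) return samples; // nothing to scale
  const gain = targetPeak / peak;
  return samples.map((s) => s * gain);
}
```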


@@ -0,0 +1,156 @@
'use client';
import { useState } from 'react';
import { ImportOptions } from '@/lib/audio/decoder';
export interface ImportDialogProps {
open: boolean;
onClose: () => void;
onImport: (options: ImportOptions) => void;
fileName?: string;
sampleRate?: number;
channels?: number;
}
export function ImportDialog({
open,
onClose,
onImport,
fileName,
sampleRate: originalSampleRate,
channels: originalChannels,
}: ImportDialogProps) {
const [options, setOptions] = useState<ImportOptions>({
convertToMono: false,
targetSampleRate: undefined,
normalizeOnImport: false,
});
// Don't render if not open (checked after useState, since hooks must run unconditionally)
if (!open) return null;
const handleImport = () => {
onImport(options);
};
const sampleRateOptions = [44100, 48000, 88200, 96000, 176400, 192000];
return (
<div className="fixed inset-0 bg-black/50 flex items-center justify-center z-50">
<div className="bg-white dark:bg-gray-800 rounded-lg p-6 w-full max-w-md shadow-xl">
<h2 className="text-xl font-bold mb-4 text-gray-900 dark:text-white">
Import Audio File
</h2>
<div className="mb-4">
<div className="text-sm text-gray-600 dark:text-gray-400 mb-2">
<strong>File:</strong> {fileName}
</div>
{originalSampleRate && (
<div className="text-sm text-gray-600 dark:text-gray-400 mb-1">
<strong>Sample Rate:</strong> {originalSampleRate} Hz
</div>
)}
{originalChannels && (
<div className="text-sm text-gray-600 dark:text-gray-400 mb-3">
<strong>Channels:</strong> {originalChannels === 1 ? 'Mono' : originalChannels === 2 ? 'Stereo' : `${originalChannels} channels`}
</div>
)}
</div>
<div className="space-y-4">
{/* Convert to Mono */}
{originalChannels && originalChannels > 1 && (
<div>
<label className="flex items-center space-x-2">
<input
type="checkbox"
checked={options.convertToMono}
onChange={(e) => setOptions({ ...options, convertToMono: e.target.checked })}
className="rounded border-gray-300 text-blue-600 focus:ring-blue-500"
/>
<span className="text-sm text-gray-700 dark:text-gray-300">
Convert to Mono
</span>
</label>
<p className="text-xs text-gray-500 dark:text-gray-400 mt-1 ml-6">
Mix all channels equally into a single mono channel
</p>
</div>
)}
{/* Resample */}
<div>
<label className="flex items-center space-x-2 mb-2">
<input
type="checkbox"
checked={options.targetSampleRate !== undefined}
onChange={(e) => setOptions({
...options,
targetSampleRate: e.target.checked ? 48000 : undefined
})}
className="rounded border-gray-300 text-blue-600 focus:ring-blue-500"
/>
<span className="text-sm text-gray-700 dark:text-gray-300">
Resample Audio
</span>
</label>
{options.targetSampleRate !== undefined && (
<select
value={options.targetSampleRate}
onChange={(e) => setOptions({
...options,
targetSampleRate: parseInt(e.target.value)
})}
className="ml-6 w-full max-w-xs px-3 py-1.5 text-sm border border-gray-300 dark:border-gray-600 rounded bg-white dark:bg-gray-700 text-gray-900 dark:text-white focus:outline-none focus:ring-2 focus:ring-blue-500"
>
{sampleRateOptions.map((rate) => (
<option key={rate} value={rate}>
{rate} Hz {rate === originalSampleRate ? '(original)' : ''}
</option>
))}
</select>
)}
<p className="text-xs text-gray-500 dark:text-gray-400 mt-1 ml-6">
Convert to a different sample rate (may affect quality)
</p>
</div>
{/* Normalize */}
<div>
<label className="flex items-center space-x-2">
<input
type="checkbox"
checked={options.normalizeOnImport}
onChange={(e) => setOptions({ ...options, normalizeOnImport: e.target.checked })}
className="rounded border-gray-300 text-blue-600 focus:ring-blue-500"
/>
<span className="text-sm text-gray-700 dark:text-gray-300">
Normalize on Import
</span>
</label>
<p className="text-xs text-gray-500 dark:text-gray-400 mt-1 ml-6">
Adjust peak amplitude to 99% (1% headroom)
</p>
</div>
</div>
<div className="flex justify-end space-x-3 mt-6">
<button
onClick={onClose}
className="px-4 py-2 text-sm font-medium text-gray-700 dark:text-gray-300 bg-gray-100 dark:bg-gray-700 hover:bg-gray-200 dark:hover:bg-gray-600 rounded transition-colors"
>
Cancel
</button>
<button
onClick={handleImport}
className="px-4 py-2 text-sm font-medium text-white bg-blue-600 hover:bg-blue-700 rounded transition-colors"
>
Import
</button>
</div>
</div>
</div>
);
}
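The "Convert to Mono" option above says it mixes all channels equally into one. The decoder behind `ImportOptions` is not shown in this diff; a minimal sketch of such a mixdown might look like this (`mixToMono` is a hypothetical helper name):

```typescript
// Average all channels sample-by-sample into a single mono buffer.
function mixToMono(channels: Float32Array[]): Float32Array {
  const length = channels[0]?.length ?? 0;
  const mono = new Float32Array(length);
  for (const channel of channels) {
    for (let i = 0; i < length; i++) mono[i] += channel[i];
  }
  const scale = 1 / Math.max(1, channels.length); // equal weight per channel
  for (let i = 0; i < length; i++) mono[i] *= scale;
  return mono;
}
```

Equal-weight averaging keeps the mix from clipping when channels are correlated, at the cost of a quieter result for uncorrelated material.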


@@ -0,0 +1,140 @@
'use client';
import * as React from 'react';
import { Keyboard, X } from 'lucide-react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
export interface KeyboardShortcutsDialogProps {
open: boolean;
onClose: () => void;
}
interface ShortcutCategory {
name: string;
shortcuts: Array<{
keys: string[];
description: string;
}>;
}
const SHORTCUTS: ShortcutCategory[] = [
{
name: 'Playback',
shortcuts: [
{ keys: ['Space'], description: 'Play / Pause' },
{ keys: ['Home'], description: 'Go to Start' },
{ keys: ['End'], description: 'Go to End' },
{ keys: ['←'], description: 'Seek Backward' },
{ keys: ['→'], description: 'Seek Forward' },
{ keys: ['Ctrl', '←'], description: 'Seek Backward 5s' },
{ keys: ['Ctrl', '→'], description: 'Seek Forward 5s' },
],
},
{
name: 'Edit',
shortcuts: [
{ keys: ['Ctrl', 'Z'], description: 'Undo' },
{ keys: ['Ctrl', 'Shift', 'Z'], description: 'Redo' },
{ keys: ['Ctrl', 'X'], description: 'Cut' },
{ keys: ['Ctrl', 'C'], description: 'Copy' },
{ keys: ['Ctrl', 'V'], description: 'Paste' },
{ keys: ['Delete'], description: 'Delete Selection' },
{ keys: ['Ctrl', 'D'], description: 'Duplicate' },
{ keys: ['Ctrl', 'A'], description: 'Select All' },
],
},
{
name: 'View',
shortcuts: [
{ keys: ['Ctrl', '+'], description: 'Zoom In' },
{ keys: ['Ctrl', '-'], description: 'Zoom Out' },
{ keys: ['Ctrl', '0'], description: 'Fit to View' },
],
},
{
name: 'File',
shortcuts: [
{ keys: ['Ctrl', 'S'], description: 'Save Project' },
{ keys: ['Ctrl', 'K'], description: 'Open Command Palette' },
],
},
];
function KeyboardKey({ keyName }: { keyName: string }) {
return (
<kbd className="px-2 py-1 text-xs font-semibold bg-muted border border-border rounded shadow-sm min-w-[2rem] text-center inline-block">
{keyName}
</kbd>
);
}
export function KeyboardShortcutsDialog({ open, onClose }: KeyboardShortcutsDialogProps) {
if (!open) return null;
return (
<Modal open={open} onClose={onClose} title="">
<div className="p-6 max-w-2xl">
{/* Header */}
<div className="flex items-start justify-between mb-6">
<div className="flex items-center gap-3">
<Keyboard className="h-6 w-6 text-primary" />
<h2 className="text-xl font-semibold">Keyboard Shortcuts</h2>
</div>
<button onClick={onClose} className="text-muted-foreground hover:text-foreground">
<X className="h-5 w-5" />
</button>
</div>
{/* Shortcuts Grid */}
<div className="grid grid-cols-1 md:grid-cols-2 gap-6">
{SHORTCUTS.map((category) => (
<div key={category.name} className="space-y-3">
<h3 className="text-sm font-semibold text-primary border-b border-border pb-2">
{category.name}
</h3>
<div className="space-y-2">
{category.shortcuts.map((shortcut, index) => (
<div
key={index}
className="flex items-center justify-between gap-4 py-1.5"
>
<span className="text-sm text-foreground flex-1">
{shortcut.description}
</span>
<div className="flex items-center gap-1 flex-shrink-0">
{shortcut.keys.map((key, keyIndex) => (
<React.Fragment key={keyIndex}>
{keyIndex > 0 && (
<span className="text-muted-foreground text-xs mx-0.5">+</span>
)}
<KeyboardKey keyName={key} />
</React.Fragment>
))}
</div>
</div>
))}
</div>
</div>
))}
</div>
{/* Footer */}
<div className="mt-6 pt-4 border-t border-border">
<p className="text-xs text-muted-foreground text-center">
Press <KeyboardKey keyName="Ctrl" /> + <KeyboardKey keyName="K" /> to open the
command palette and search for more actions
</p>
</div>
{/* Close Button */}
<div className="mt-6 flex justify-end">
<Button onClick={onClose} variant="default">
Close
</Button>
</div>
</div>
</Modal>
);
}


@@ -0,0 +1,101 @@
'use client';
import * as React from 'react';
import { AlertTriangle, Info, X } from 'lucide-react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import { formatMemorySize } from '@/lib/utils/memory-limits';
interface MemoryWarningDialogProps {
open: boolean;
estimatedMemoryMB: number;
availableMemoryMB?: number;
warning: string;
fileName?: string;
onContinue: () => void;
onCancel: () => void;
}
export function MemoryWarningDialog({
open,
estimatedMemoryMB,
availableMemoryMB,
warning,
fileName,
onContinue,
onCancel,
}: MemoryWarningDialogProps) {
if (!open) return null;
const estimatedBytes = estimatedMemoryMB * 1024 * 1024;
const availableBytes = availableMemoryMB ? availableMemoryMB * 1024 * 1024 : undefined;
return (
<Modal open={open} onClose={onCancel} title="">
<div className="p-6 max-w-md">
{/* Header */}
<div className="flex items-start justify-between mb-4">
<div className="flex items-center gap-2">
<AlertTriangle className="h-5 w-5 text-yellow-500" />
<h2 className="text-lg font-semibold">Memory Warning</h2>
</div>
<button onClick={onCancel} className="text-muted-foreground hover:text-foreground">
<X className="h-4 w-4" />
</button>
</div>
<p className="text-sm text-muted-foreground mb-4">
{warning}
</p>
<div className="space-y-4">
{/* File Info */}
{fileName && (
<div className="flex items-center gap-2 text-sm">
<Info className="h-4 w-4 text-muted-foreground" />
<span className="font-medium">File:</span>
<span className="text-muted-foreground truncate">{fileName}</span>
</div>
)}
{/* Memory Details */}
<div className="bg-muted/50 border border-border rounded-md p-3 space-y-2">
<div className="flex justify-between text-sm">
<span className="text-muted-foreground">Estimated Memory:</span>
<span className="font-medium">{formatMemorySize(estimatedBytes)}</span>
</div>
{availableBytes && (
<div className="flex justify-between text-sm">
<span className="text-muted-foreground">Available Memory:</span>
<span className="font-medium">{formatMemorySize(availableBytes)}</span>
</div>
)}
</div>
{/* Warning Message */}
<div className="bg-yellow-500/10 border border-yellow-500/20 rounded-md p-3">
<p className="text-sm text-yellow-700 dark:text-yellow-400">
<strong>Note:</strong> Loading large files may cause performance issues or browser crashes,
especially on devices with limited memory. Consider:
</p>
<ul className="mt-2 text-sm text-yellow-700 dark:text-yellow-400 space-y-1 list-disc list-inside">
<li>Closing other browser tabs</li>
<li>Using a shorter audio file</li>
<li>Splitting large files into smaller segments</li>
</ul>
</div>
{/* Actions */}
<div className="flex justify-end gap-2">
<Button onClick={onCancel} variant="outline">
Cancel
</Button>
<Button onClick={onContinue} variant="default">
Continue Anyway
</Button>
</div>
</div>
</div>
</Modal>
);
}
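The dialog receives sizes in MB and multiplies by 1024 × 1024 before passing them to `formatMemorySize`. A minimal sketch of that conversion, including the optional-prop case (`mbToBytes` and `optionalMbToBytes` are illustrative helpers, not part of the diff):

```typescript
// Mirrors the dialog's `estimatedMemoryMB * 1024 * 1024` computation.
function mbToBytes(mb: number): number {
  return mb * 1024 * 1024;
}

// An absent availableMemoryMB stays undefined, matching the
// component's optional availableBytes handling.
function optionalMbToBytes(mb?: number): number | undefined {
  return mb === undefined ? undefined : mbToBytes(mb);
}
```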

View File

@@ -0,0 +1,162 @@
'use client';
import * as React from 'react';
import { X, Plus, Trash2, Copy, FolderOpen, Download, Upload } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import type { ProjectMetadata } from '@/lib/storage/db';
import { formatDuration } from '@/lib/audio/decoder';
export interface ProjectsDialogProps {
open: boolean;
onClose: () => void;
projects: ProjectMetadata[];
onNewProject: () => void;
onLoadProject: (projectId: string) => void;
onDeleteProject: (projectId: string) => void;
onDuplicateProject: (projectId: string) => void;
onExportProject: (projectId: string) => void;
onImportProject: () => void;
}
export function ProjectsDialog({
open,
onClose,
projects,
onNewProject,
onLoadProject,
onDeleteProject,
onDuplicateProject,
onExportProject,
onImportProject,
}: ProjectsDialogProps) {
if (!open) return null;
const formatDate = (timestamp: number) => {
return new Date(timestamp).toLocaleString(undefined, {
year: 'numeric',
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
});
};
return (
<div className="fixed inset-0 z-50 flex items-center justify-center bg-black/50">
<div className="bg-card border border-border rounded-lg shadow-xl w-full max-w-3xl max-h-[80vh] flex flex-col">
{/* Header */}
<div className="flex items-center justify-between p-6 border-b border-border">
<h2 className="text-lg font-semibold text-foreground">Projects</h2>
<div className="flex items-center gap-2">
<Button
onClick={onImportProject}
variant="outline"
size="sm"
className="gap-2"
>
<Upload className="h-4 w-4" />
Import
</Button>
<Button
onClick={onNewProject}
variant="default"
size="sm"
className="gap-2"
>
<Plus className="h-4 w-4" />
New Project
</Button>
<button
onClick={onClose}
className="text-muted-foreground hover:text-foreground transition-colors"
>
<X className="h-5 w-5" />
</button>
</div>
</div>
{/* Projects List */}
<div className="flex-1 overflow-y-auto custom-scrollbar p-6">
{projects.length === 0 ? (
<div className="flex flex-col items-center justify-center py-12 text-center">
<FolderOpen className="h-16 w-16 text-muted-foreground mb-4" />
<h3 className="text-lg font-medium text-foreground mb-2">
No projects yet
</h3>
<p className="text-sm text-muted-foreground mb-4">
Create your first project to get started
</p>
<Button onClick={onNewProject} variant="default">
<Plus className="h-4 w-4 mr-2" />
Create Project
</Button>
</div>
) : (
<div className="grid gap-4">
{projects.map((project) => (
<div
key={project.id}
className="border border-border rounded-lg p-4 hover:bg-accent/50 transition-colors"
>
<div className="flex items-start justify-between">
<div className="flex-1">
<h3 className="font-medium text-foreground mb-1">
{project.name}
</h3>
{project.description && (
<p className="text-sm text-muted-foreground mb-2">
{project.description}
</p>
)}
<div className="flex flex-wrap gap-4 text-xs text-muted-foreground">
<span>{project.trackCount} tracks</span>
<span>{formatDuration(project.duration)}</span>
<span>{project.sampleRate / 1000}kHz</span>
<span>Updated {formatDate(project.updatedAt)}</span>
</div>
</div>
<div className="flex items-center gap-2 ml-4">
<button
onClick={() => onLoadProject(project.id)}
className="px-3 py-1.5 text-sm font-medium text-primary hover:bg-primary/10 rounded transition-colors"
title="Open project"
>
Open
</button>
<button
onClick={() => onExportProject(project.id)}
className="p-1.5 text-muted-foreground hover:text-foreground hover:bg-accent rounded transition-colors"
title="Export project"
>
<Download className="h-4 w-4" />
</button>
<button
onClick={() => onDuplicateProject(project.id)}
className="p-1.5 text-muted-foreground hover:text-foreground hover:bg-accent rounded transition-colors"
title="Duplicate project"
>
<Copy className="h-4 w-4" />
</button>
<button
onClick={() => {
if (confirm(`Delete "${project.name}"? This cannot be undone.`)) {
onDeleteProject(project.id);
}
}}
className="p-1.5 text-muted-foreground hover:text-destructive hover:bg-destructive/10 rounded transition-colors"
title="Delete project"
>
<Trash2 className="h-4 w-4" />
</button>
</div>
</div>
</div>
))}
</div>
)}
</div>
</div>
</div>
);
}

View File

@@ -0,0 +1,106 @@
'use client';
import * as React from 'react';
import { AlertCircle, FileQuestion, X } from 'lucide-react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
export interface UnsupportedFormatDialogProps {
open: boolean;
fileName: string;
fileType: string;
onClose: () => void;
}
const SUPPORTED_FORMATS = [
{ extension: 'WAV', mimeType: 'audio/wav', description: 'Lossless, widely supported' },
{ extension: 'MP3', mimeType: 'audio/mpeg', description: 'Compressed, universal support' },
{ extension: 'OGG', mimeType: 'audio/ogg', description: 'Free, open format' },
{ extension: 'FLAC', mimeType: 'audio/flac', description: 'Lossless compression' },
{ extension: 'M4A/AAC', mimeType: 'audio/aac', description: 'Apple audio format' },
{ extension: 'AIFF', mimeType: 'audio/aiff', description: 'Apple lossless format' },
{ extension: 'WebM', mimeType: 'audio/webm', description: 'Modern web format' },
];
export function UnsupportedFormatDialog({
open,
fileName,
fileType,
onClose,
}: UnsupportedFormatDialogProps) {
if (!open) return null;
return (
<Modal open={open} onClose={onClose} title="">
<div className="p-6 max-w-lg">
{/* Header */}
<div className="flex items-start justify-between mb-4">
<div className="flex items-center gap-3">
<FileQuestion className="h-6 w-6 text-yellow-500" />
<h2 className="text-lg font-semibold">Unsupported File Format</h2>
</div>
<button onClick={onClose} className="text-muted-foreground hover:text-foreground">
<X className="h-4 w-4" />
</button>
</div>
{/* Error Message */}
<div className="bg-yellow-500/10 border border-yellow-500/20 rounded-md p-4 mb-4">
<div className="flex items-start gap-2">
<AlertCircle className="h-4 w-4 text-yellow-600 dark:text-yellow-400 mt-0.5 flex-shrink-0" />
<div className="flex-1">
<p className="text-sm text-yellow-800 dark:text-yellow-200 font-medium mb-1">
Cannot open this file
</p>
<p className="text-sm text-yellow-700 dark:text-yellow-300">
<strong>{fileName}</strong>
{fileType && (
<span className="text-muted-foreground"> ({fileType})</span>
)}
</p>
</div>
</div>
</div>
{/* Supported Formats */}
<div className="mb-6">
<h3 className="text-sm font-semibold mb-3">Supported Audio Formats:</h3>
<div className="grid grid-cols-1 gap-2">
{SUPPORTED_FORMATS.map((format) => (
<div
key={format.extension}
className="flex items-center justify-between gap-4 p-2 rounded bg-muted/30 border border-border/50"
>
<div className="flex items-center gap-3">
<span className="text-sm font-mono font-semibold text-primary min-w-[80px]">
{format.extension}
</span>
<span className="text-xs text-muted-foreground">
{format.description}
</span>
</div>
</div>
))}
</div>
</div>
{/* Recommendations */}
<div className="bg-muted/50 border border-border rounded-md p-4 mb-4">
<h4 className="text-sm font-semibold mb-2">How to fix this:</h4>
<ul className="text-sm text-muted-foreground space-y-2 list-disc list-inside">
<li>Convert your audio file to a supported format (WAV or MP3 recommended)</li>
<li>Use a free audio converter like Audacity, FFmpeg, or online converters</li>
<li>Check that the file isn't corrupted or incomplete</li>
</ul>
</div>
{/* Close Button */}
<div className="flex justify-end">
<Button onClick={onClose} variant="default">
Got it
</Button>
</div>
</div>
</Modal>
);
}
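A caller presumably decides whether to open this dialog by checking the file's MIME type against the `SUPPORTED_FORMATS` table above. A sketch of such a guard, under the assumption that the check is a simple MIME-type lookup (`isSupportedAudioType` is hypothetical, not code from the diff):

```typescript
// MIME types taken from the dialog's SUPPORTED_FORMATS list.
const SUPPORTED_MIME_TYPES = [
  'audio/wav', 'audio/mpeg', 'audio/ogg', 'audio/flac',
  'audio/aac', 'audio/aiff', 'audio/webm',
];

// Returns false for types that should trigger UnsupportedFormatDialog.
function isSupportedAudioType(mimeType: string): boolean {
  return SUPPORTED_MIME_TYPES.includes(mimeType.toLowerCase());
}
```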

File diff suppressed because it is too large

View File

@@ -0,0 +1,139 @@
'use client';
import * as React from 'react';
import { Scissors, Copy, Clipboard, Trash2, CropIcon, Info } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import type { Selection } from '@/types/selection';
import { formatDuration } from '@/lib/audio/decoder';
export interface EditControlsProps {
selection: Selection | null;
hasClipboard: boolean;
onCut: () => void;
onCopy: () => void;
onPaste: () => void;
onDelete: () => void;
onTrim: () => void;
onClearSelection: () => void;
className?: string;
}
export function EditControls({
selection,
hasClipboard,
onCut,
onCopy,
onPaste,
onDelete,
onTrim,
onClearSelection,
className,
}: EditControlsProps) {
const hasSelection = selection !== null;
const selectionDuration = selection ? selection.end - selection.start : 0;
return (
<div className={cn('space-y-4', className)}>
{/* Selection Info */}
{hasSelection && (
<div className="rounded-lg border-2 border-info bg-info/90 p-3">
<div className="flex items-start gap-2">
<Info className="h-4 w-4 text-white dark:text-white mt-0.5 flex-shrink-0" />
<div className="flex-1 min-w-0 text-sm text-white dark:text-white">
<p className="font-medium">Selection Active</p>
<p className="opacity-95 mt-1">
Duration: {formatDuration(selectionDuration)} |
Start: {formatDuration(selection.start)} |
End: {formatDuration(selection.end)}
</p>
<p className="text-xs opacity-90 mt-1">
Tip: Hold Shift and drag on the waveform to select a region
</p>
</div>
</div>
</div>
)}
{/* Edit Buttons */}
<div className="grid grid-cols-2 gap-2">
<Button
variant="outline"
onClick={onCut}
disabled={!hasSelection}
title="Cut (Ctrl+X)"
className="justify-start"
>
<Scissors className="h-4 w-4 mr-2" />
Cut
</Button>
<Button
variant="outline"
onClick={onCopy}
disabled={!hasSelection}
title="Copy (Ctrl+C)"
className="justify-start"
>
<Copy className="h-4 w-4 mr-2" />
Copy
</Button>
<Button
variant="outline"
onClick={onPaste}
disabled={!hasClipboard}
title="Paste (Ctrl+V)"
className="justify-start"
>
<Clipboard className="h-4 w-4 mr-2" />
Paste
</Button>
<Button
variant="outline"
onClick={onDelete}
disabled={!hasSelection}
title="Delete (Del)"
className="justify-start"
>
<Trash2 className="h-4 w-4 mr-2" />
Delete
</Button>
<Button
variant="outline"
onClick={onTrim}
disabled={!hasSelection}
title="Trim to Selection"
className="justify-start"
>
<CropIcon className="h-4 w-4 mr-2" />
Trim
</Button>
<Button
variant="outline"
onClick={onClearSelection}
disabled={!hasSelection}
title="Clear Selection (Esc)"
className="justify-start"
>
Clear
</Button>
</div>
{/* Keyboard Shortcuts Info */}
<div className="text-xs text-muted-foreground space-y-1 p-3 rounded-lg bg-muted/30">
<p className="font-medium mb-2">Edit Shortcuts:</p>
<p> Shift+Drag: Select region</p>
<p> Ctrl+A: Select all</p>
<p> Ctrl+X: Cut</p>
<p> Ctrl+C: Copy</p>
<p> Ctrl+V: Paste</p>
<p> Delete: Delete selection</p>
<p> Escape: Clear selection</p>
</div>
</div>
);
}
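The component derives its two pieces of state from the selection prop alone: a selection is active when non-null, and its duration is `end - start`. Extracted as a pure function for clarity (this helper is illustrative, not part of the component):

```typescript
// Shape of the Selection type used by EditControls.
interface Selection {
  start: number; // seconds
  end: number;   // seconds
}

// Mirrors `selection ? selection.end - selection.start : 0`.
function selectionDuration(selection: Selection | null): number {
  return selection ? selection.end - selection.start : 0;
}
```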

View File

@@ -84,7 +84,7 @@ export function FileUpload({ onFileSelect, className }: FileUploadProps) {
           Click to browse or drag and drop
         </p>
         <p className="text-xs text-muted-foreground mt-2">
-          Supported formats: WAV, MP3, OGG, FLAC, AAC, M4A
+          Supported formats: WAV, MP3, OGG, FLAC, AAC, M4A, AIFF
         </p>
       </div>
     </div>

View File

@@ -0,0 +1,107 @@
'use client';
import * as React from 'react';
import { Undo2, Redo2, History, Info } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import type { HistoryState } from '@/lib/history/history-manager';
export interface HistoryControlsProps {
historyState: HistoryState;
onUndo: () => void;
onRedo: () => void;
onClear?: () => void;
className?: string;
}
export function HistoryControls({
historyState,
onUndo,
onRedo,
onClear,
className,
}: HistoryControlsProps) {
return (
<div className={cn('space-y-4', className)}>
{/* History Info */}
{historyState.historySize > 0 && (
<div className="rounded-lg border-2 border-info bg-info/90 p-3">
<div className="flex items-start gap-2">
<Info className="h-4 w-4 text-white dark:text-white mt-0.5 flex-shrink-0" />
<div className="flex-1 min-w-0 text-sm text-white dark:text-white">
<p className="font-medium">History Available</p>
<p className="opacity-95 mt-1">
{historyState.historySize} action{historyState.historySize !== 1 ? 's' : ''} in history
</p>
{historyState.undoDescription && (
<p className="text-xs opacity-90 mt-1">
Next undo: {historyState.undoDescription}
</p>
)}
{historyState.redoDescription && (
<p className="text-xs opacity-90 mt-1">
Next redo: {historyState.redoDescription}
</p>
)}
</div>
</div>
</div>
)}
{/* Control Buttons */}
<div className="grid grid-cols-2 gap-2">
<Button
variant="outline"
onClick={onUndo}
disabled={!historyState.canUndo}
title={
historyState.canUndo
? `Undo ${historyState.undoDescription} (Ctrl+Z)`
: 'Nothing to undo (Ctrl+Z)'
}
className="justify-start"
>
<Undo2 className="h-4 w-4 mr-2" />
Undo
</Button>
<Button
variant="outline"
onClick={onRedo}
disabled={!historyState.canRedo}
title={
historyState.canRedo
? `Redo ${historyState.redoDescription} (Ctrl+Y)`
: 'Nothing to redo (Ctrl+Y)'
}
className="justify-start"
>
<Redo2 className="h-4 w-4 mr-2" />
Redo
</Button>
{onClear && historyState.historySize > 0 && (
<Button
variant="outline"
onClick={onClear}
title="Clear history"
className="justify-start col-span-2"
>
<History className="h-4 w-4 mr-2" />
Clear History ({historyState.historySize})
</Button>
)}
</div>
{/* Keyboard Shortcuts Info */}
<div className="text-xs text-muted-foreground space-y-1 p-3 rounded-lg bg-muted/30">
<p className="font-medium mb-2">History Shortcuts:</p>
<p> Ctrl+Z: Undo last action</p>
<p> Ctrl+Y or Ctrl+Shift+Z: Redo last action</p>
<p className="mt-2 text-xs text-muted-foreground/75">
History tracks up to 50 edit operations
</p>
</div>
</div>
);
}

View File

@@ -1,9 +1,8 @@
 'use client';
 import * as React from 'react';
-import { Play, Pause, Square, SkipBack, Volume2, VolumeX } from 'lucide-react';
+import { Play, Pause, Square, SkipBack, Circle, AlignVerticalJustifyStart, AlignVerticalJustifyEnd, Layers, Repeat } from 'lucide-react';
 import { Button } from '@/components/ui/Button';
-import { Slider } from '@/components/ui/Slider';
 import { cn } from '@/lib/utils/cn';
 export interface PlaybackControlsProps {
@@ -15,12 +14,30 @@ export interface PlaybackControlsProps {
   onPlay: () => void;
   onPause: () => void;
   onStop: () => void;
-  onSeek: (time: number) => void;
+  onSeek: (time: number, autoPlay?: boolean) => void;
   onVolumeChange: (volume: number) => void;
   disabled?: boolean;
   className?: string;
   currentTimeFormatted?: string;
   durationFormatted?: string;
+  isRecording?: boolean;
+  onStartRecording?: () => void;
+  onStopRecording?: () => void;
+  punchInEnabled?: boolean;
+  punchInTime?: number;
+  punchOutTime?: number;
+  onPunchInEnabledChange?: (enabled: boolean) => void;
+  onPunchInTimeChange?: (time: number) => void;
+  onPunchOutTimeChange?: (time: number) => void;
+  overdubEnabled?: boolean;
+  onOverdubEnabledChange?: (enabled: boolean) => void;
+  loopEnabled?: boolean;
+  loopStart?: number;
+  loopEnd?: number;
+  onToggleLoop?: () => void;
+  onSetLoopPoints?: (start: number, end: number) => void;
+  playbackRate?: number;
+  onPlaybackRateChange?: (rate: number) => void;
 }
 export function PlaybackControls({
@@ -38,10 +55,25 @@ export function PlaybackControls({
   className,
   currentTimeFormatted,
   durationFormatted,
+  isRecording = false,
+  onStartRecording,
+  onStopRecording,
+  punchInEnabled = false,
+  punchInTime = 0,
+  punchOutTime = 0,
+  onPunchInEnabledChange,
+  onPunchInTimeChange,
+  onPunchOutTimeChange,
+  overdubEnabled = false,
+  onOverdubEnabledChange,
+  loopEnabled = false,
+  loopStart = 0,
+  loopEnd = 0,
+  onToggleLoop,
+  onSetLoopPoints,
+  playbackRate = 1.0,
+  onPlaybackRateChange,
 }: PlaybackControlsProps) {
-  const [isMuted, setIsMuted] = React.useState(false);
-  const [previousVolume, setPreviousVolume] = React.useState(volume);
   const handlePlayPause = () => {
     if (isPlaying) {
       onPause();
@@ -50,30 +82,10 @@ export function PlaybackControls({
     }
   };
-  const handleMuteToggle = () => {
-    if (isMuted) {
-      onVolumeChange(previousVolume);
-      setIsMuted(false);
-    } else {
-      setPreviousVolume(volume);
-      onVolumeChange(0);
-      setIsMuted(true);
-    }
-  };
-  const handleVolumeChange = (newVolume: number) => {
-    onVolumeChange(newVolume);
-    if (newVolume === 0) {
-      setIsMuted(true);
-    } else {
-      setIsMuted(false);
-    }
-  };
   const progress = duration > 0 ? (currentTime / duration) * 100 : 0;
   return (
-    <div className={cn('space-y-4', className)}>
+    <div className={cn('space-y-4 w-full max-w-2xl', className)}>
       {/* Timeline Slider */}
       <div className="space-y-2">
         <input
@@ -82,7 +94,9 @@ export function PlaybackControls({
           max={duration || 100}
           step={0.01}
           value={currentTime}
-          onChange={(e) => onSeek(parseFloat(e.target.value))}
+          onChange={(e) => onSeek(parseFloat(e.target.value), false)}
+          onMouseUp={(e) => onSeek(parseFloat((e.target as HTMLInputElement).value), true)}
+          onTouchEnd={(e) => onSeek(parseFloat((e.target as HTMLInputElement).value), true)}
           disabled={disabled || duration === 0}
           className={cn(
             'w-full h-2 bg-secondary rounded-lg appearance-none cursor-pointer',
@@ -105,8 +119,63 @@ export function PlaybackControls({
         </div>
       </div>
+      {/* Punch In/Out Times - Show when enabled */}
+      {punchInEnabled && onPunchInTimeChange && onPunchOutTimeChange && (
+        <div className="flex items-center gap-3 text-xs bg-muted/50 rounded px-3 py-2">
+          <div className="flex items-center gap-2 flex-1">
+            <label className="text-muted-foreground flex items-center gap-1 flex-shrink-0">
+              <AlignVerticalJustifyStart className="h-3 w-3" />
+              Punch In
+            </label>
+            <input
+              type="number"
+              min={0}
+              max={punchOutTime || duration}
+              step={0.1}
+              value={punchInTime.toFixed(2)}
+              onChange={(e) => onPunchInTimeChange(parseFloat(e.target.value))}
+              className="flex-1 px-2 py-1 bg-background border border-border rounded text-xs font-mono"
+            />
+            <Button
+              variant="ghost"
+              size="sm"
+              onClick={() => onPunchInTimeChange(currentTime)}
+              title="Set punch-in to current time"
+              className="h-6 px-2 text-xs"
+            >
+              Set
+            </Button>
+          </div>
+          <div className="flex items-center gap-2 flex-1">
+            <label className="text-muted-foreground flex items-center gap-1 flex-shrink-0">
+              <AlignVerticalJustifyEnd className="h-3 w-3" />
+              Punch Out
+            </label>
+            <input
+              type="number"
+              min={punchInTime}
+              max={duration}
+              step={0.1}
+              value={punchOutTime.toFixed(2)}
+              onChange={(e) => onPunchOutTimeChange(parseFloat(e.target.value))}
+              className="flex-1 px-2 py-1 bg-background border border-border rounded text-xs font-mono"
+            />
+            <Button
+              variant="ghost"
+              size="sm"
+              onClick={() => onPunchOutTimeChange(currentTime)}
+              title="Set punch-out to current time"
+              className="h-6 px-2 text-xs"
+            >
+              Set
+            </Button>
+          </div>
+        </div>
+      )}
       {/* Transport Controls */}
-      <div className="flex items-center justify-between gap-4">
+      <div className="flex items-center justify-center gap-4">
         <div className="flex items-center gap-2">
           <Button
             variant="outline"
@@ -142,33 +211,152 @@ export function PlaybackControls({
           >
             <Square className="h-4 w-4" />
           </Button>
-        </div>
-        {/* Volume Control */}
-        <div className="flex items-center gap-3 min-w-[200px]">
-          <Button
-            variant="ghost"
-            size="icon"
-            onClick={handleMuteToggle}
-            title={isMuted ? 'Unmute' : 'Mute'}
-          >
-            {isMuted || volume === 0 ? (
-              <VolumeX className="h-5 w-5" />
-            ) : (
-              <Volume2 className="h-5 w-5" />
-            )}
-          </Button>
-          <Slider
-            value={volume}
-            onChange={handleVolumeChange}
-            min={0}
-            max={1}
-            step={0.01}
-            className="flex-1"
-          />
+          {/* Record Button */}
+          {(onStartRecording || onStopRecording) && (
+            <>
+              <Button
+                variant="outline"
+                size="icon"
+                onClick={isRecording ? onStopRecording : onStartRecording}
+                disabled={disabled}
+                title={isRecording ? 'Stop Recording' : 'Start Recording'}
+                className={cn(
+                  isRecording && 'bg-red-500/20 hover:bg-red-500/30 border-red-500/50',
+                  isRecording && 'animate-pulse'
+                )}
+              >
+                <Circle className={cn('h-4 w-4', isRecording && 'text-red-500 fill-red-500')} />
+              </Button>
+              {/* Recording Options */}
+              <div className="flex items-center gap-1 border-l border-border pl-2 ml-1">
+                {/* Punch In/Out Toggle */}
+                {onPunchInEnabledChange && (
+                  <Button
+                    variant="ghost"
+                    size="icon-sm"
+                    onClick={() => onPunchInEnabledChange(!punchInEnabled)}
+                    title="Toggle Punch In/Out Recording"
+                    className={cn(
+                      punchInEnabled && 'bg-primary/20 hover:bg-primary/30'
+                    )}
+                  >
+                    <AlignVerticalJustifyStart className={cn('h-3.5 w-3.5', punchInEnabled && 'text-primary')} />
+                  </Button>
+                )}
+                {/* Overdub Mode Toggle */}
+                {onOverdubEnabledChange && (
+                  <Button
+                    variant="ghost"
+                    size="icon-sm"
+                    onClick={() => onOverdubEnabledChange(!overdubEnabled)}
+                    title="Toggle Overdub Mode (layer recordings)"
+                    className={cn(
+                      overdubEnabled && 'bg-primary/20 hover:bg-primary/30'
+                    )}
+                  >
+                    <Layers className={cn('h-3.5 w-3.5', overdubEnabled && 'text-primary')} />
+                  </Button>
+                )}
+              </div>
+            </>
+          )}
+          {/* Loop Toggle */}
+          {onToggleLoop && (
+            <div className="flex items-center gap-1 border-l border-border pl-2 ml-1">
+              <Button
+                variant="ghost"
+                size="icon-sm"
+                onClick={onToggleLoop}
+                title="Toggle Loop Playback"
+                className={cn(
+                  loopEnabled && 'bg-primary/20 hover:bg-primary/30'
+                )}
+              >
+                <Repeat className={cn('h-3.5 w-3.5', loopEnabled && 'text-primary')} />
+              </Button>
+            </div>
+          )}
+          {/* Playback Speed Control */}
+          {onPlaybackRateChange && (
+            <div className="flex items-center gap-1 border-l border-border pl-2 ml-1">
+              <select
+                value={playbackRate}
+                onChange={(e) => onPlaybackRateChange(parseFloat(e.target.value))}
+                className="h-7 px-2 py-0 bg-background border border-border rounded text-xs cursor-pointer hover:bg-muted/50 focus:outline-none focus:ring-2 focus:ring-ring"
+                title="Playback Speed"
+              >
+                <option value={0.25}>0.25x</option>
+                <option value={0.5}>0.5x</option>
+                <option value={0.75}>0.75x</option>
+                <option value={1.0}>1x</option>
+                <option value={1.25}>1.25x</option>
+                <option value={1.5}>1.5x</option>
+                <option value={2.0}>2x</option>
+              </select>
+            </div>
+          )}
         </div>
       </div>
+      {/* Loop Points - Show when enabled */}
+      {loopEnabled && onSetLoopPoints && (
+        <div className="flex items-center gap-3 text-xs bg-muted/50 rounded px-3 py-2">
+          <div className="flex items-center gap-2 flex-1">
+            <label className="text-muted-foreground flex items-center gap-1 flex-shrink-0">
+              <AlignVerticalJustifyStart className="h-3 w-3" />
+              Loop Start
+            </label>
+            <input
+              type="number"
+              min={0}
+              max={loopEnd || duration}
+              step={0.1}
+              value={loopStart.toFixed(2)}
+              onChange={(e) => onSetLoopPoints(parseFloat(e.target.value), loopEnd)}
+              className="flex-1 px-2 py-1 bg-background border border-border rounded text-xs font-mono"
+            />
+            <Button
+              variant="ghost"
+              size="sm"
+              onClick={() => onSetLoopPoints(currentTime, loopEnd)}
+              title="Set loop start to current time"
+              className="h-6 px-2 text-xs"
+            >
+              Set
+            </Button>
+          </div>
+          <div className="flex items-center gap-2 flex-1">
+            <label className="text-muted-foreground flex items-center gap-1 flex-shrink-0">
+              <AlignVerticalJustifyEnd className="h-3 w-3" />
+              Loop End
+            </label>
+            <input
+              type="number"
+              min={loopStart}
+              max={duration}
+              step={0.1}
+              value={loopEnd.toFixed(2)}
+              onChange={(e) => onSetLoopPoints(loopStart, parseFloat(e.target.value))}
+              className="flex-1 px-2 py-1 bg-background border border-border rounded text-xs font-mono"
+            />
+            <Button
+              variant="ghost"
+              size="sm"
+              onClick={() => onSetLoopPoints(loopStart, currentTime)}
+              title="Set loop end to current time"
+              className="h-6 px-2 text-xs"
+            >
+              Set
+            </Button>
+          </div>
+        </div>
+      )}
     </div>
   );
 }

View File

@@ -2,18 +2,21 @@
import * as React from 'react'; import * as React from 'react';
import { cn } from '@/lib/utils/cn'; import { cn } from '@/lib/utils/cn';
import { generateMinMaxPeaks } from '@/lib/waveform/peaks'; import { useAudioWorker } from '@/lib/hooks/useAudioWorker';
import type { Selection } from '@/types/selection';
export interface WaveformProps { export interface WaveformProps {
audioBuffer: AudioBuffer | null; audioBuffer: AudioBuffer | null;
currentTime: number; currentTime: number;
duration: number; duration: number;
onSeek?: (time: number) => void; onSeek?: (time: number, autoPlay?: boolean) => void;
className?: string; className?: string;
height?: number; height?: number;
zoom?: number; zoom?: number;
scrollOffset?: number; scrollOffset?: number;
amplitudeScale?: number; amplitudeScale?: number;
selection?: Selection | null;
onSelectionChange?: (selection: Selection | null) => void;
} }
export function Waveform({ export function Waveform({
@@ -26,11 +29,25 @@ export function Waveform({
zoom = 1, zoom = 1,
scrollOffset = 0, scrollOffset = 0,
amplitudeScale = 1, amplitudeScale = 1,
selection = null,
onSelectionChange,
}: WaveformProps) { }: WaveformProps) {
const canvasRef = React.useRef<HTMLCanvasElement>(null); const canvasRef = React.useRef<HTMLCanvasElement>(null);
const containerRef = React.useRef<HTMLDivElement>(null); const containerRef = React.useRef<HTMLDivElement>(null);
const [width, setWidth] = React.useState(800); const [width, setWidth] = React.useState(800);
const [isDragging, setIsDragging] = React.useState(false); const [isDragging, setIsDragging] = React.useState(false);
const [isSelecting, setIsSelecting] = React.useState(false);
const [selectionStart, setSelectionStart] = React.useState<number | null>(null);
// Worker for peak generation
const worker = useAudioWorker();
// Cache peaks to avoid regenerating on every render
const [peaksCache, setPeaksCache] = React.useState<{
width: number;
min: Float32Array;
max: Float32Array;
} | null>(null);
// Handle resize // Handle resize
React.useEffect(() => { React.useEffect(() => {
@@ -45,10 +62,35 @@ export function Waveform({
return () => window.removeEventListener('resize', handleResize); return () => window.removeEventListener('resize', handleResize);
}, []); }, []);
// Generate peaks in worker when audioBuffer or zoom changes
React.useEffect(() => {
if (!audioBuffer) {
setPeaksCache(null);
return;
}
const visibleWidth = Math.floor(width * zoom);
// Check if we already have peaks for this width
if (peaksCache && peaksCache.width === visibleWidth) {
return;
}
// Generate peaks in worker
const channelData = audioBuffer.getChannelData(0);
worker.generateMinMaxPeaks(channelData, visibleWidth).then((peaks) => {
setPeaksCache({
width: visibleWidth,
min: peaks.min,
max: peaks.max,
});
});
}, [audioBuffer, width, zoom, worker, peaksCache]);
// Draw waveform // Draw waveform
React.useEffect(() => { React.useEffect(() => {
const canvas = canvasRef.current; const canvas = canvasRef.current;
if (!canvas || !audioBuffer) return; if (!canvas || !audioBuffer || !peaksCache) return;
const ctx = canvas.getContext('2d'); const ctx = canvas.getContext('2d');
if (!ctx) return; if (!ctx) return;
@@ -68,8 +110,8 @@ export function Waveform({
// Calculate visible width based on zoom // Calculate visible width based on zoom
const visibleWidth = Math.floor(width * zoom); const visibleWidth = Math.floor(width * zoom);
// Generate peaks for visible portion // Use cached peaks
const { min, max } = generateMinMaxPeaks(audioBuffer, visibleWidth, 0); const { min, max } = peaksCache;
// Draw waveform // Draw waveform
const middle = height / 2; const middle = height / 2;
@@ -128,6 +170,38 @@ export function Waveform({
ctx.lineTo(width, middle); ctx.lineTo(width, middle);
ctx.stroke(); ctx.stroke();
// Draw selection
if (selection) {
const selectionStartX = ((selection.start / duration) * visibleWidth) - scrollOffset;
const selectionEndX = ((selection.end / duration) * visibleWidth) - scrollOffset;
if (selectionEndX >= 0 && selectionStartX <= width) {
const clampedStart = Math.max(0, selectionStartX);
const clampedEnd = Math.min(width, selectionEndX);
ctx.fillStyle = 'rgba(59, 130, 246, 0.3)';
ctx.fillRect(clampedStart, 0, clampedEnd - clampedStart, height);
// Selection borders
ctx.strokeStyle = '#3b82f6';
ctx.lineWidth = 2;
if (selectionStartX >= 0 && selectionStartX <= width) {
ctx.beginPath();
ctx.moveTo(selectionStartX, 0);
ctx.lineTo(selectionStartX, height);
ctx.stroke();
}
if (selectionEndX >= 0 && selectionEndX <= width) {
ctx.beginPath();
ctx.moveTo(selectionEndX, 0);
ctx.lineTo(selectionEndX, height);
ctx.stroke();
}
}
}
// Draw playhead // Draw playhead
if (progressX >= 0 && progressX <= width) { if (progressX >= 0 && progressX <= width) {
ctx.strokeStyle = '#ef4444'; ctx.strokeStyle = '#ef4444';
@@ -137,7 +211,7 @@ export function Waveform({
ctx.lineTo(progressX, height); ctx.lineTo(progressX, height);
ctx.stroke(); ctx.stroke();
} }
}, [audioBuffer, width, height, currentTime, duration, zoom, scrollOffset, amplitudeScale]); }, [audioBuffer, width, height, currentTime, duration, zoom, scrollOffset, amplitudeScale, selection, peaksCache]);
const handleClick = (e: React.MouseEvent<HTMLCanvasElement>) => { const handleClick = (e: React.MouseEvent<HTMLCanvasElement>) => {
if (!onSeek || !duration || isDragging) return; if (!onSeek || !duration || isDragging) return;
@@ -157,34 +231,77 @@ export function Waveform({
}; };
const handleMouseDown = (e: React.MouseEvent<HTMLCanvasElement>) => {
if (!duration) return;
const canvas = canvasRef.current;
if (!canvas) return;
const rect = canvas.getBoundingClientRect();
const x = e.clientX - rect.left;
const visibleWidth = width * zoom;
const actualX = x + scrollOffset;
const clickedTime = (actualX / visibleWidth) * duration;
// Start selection on drag
setIsSelecting(true);
setSelectionStart(clickedTime);
if (onSelectionChange) {
onSelectionChange({ start: clickedTime, end: clickedTime });
}
};
const handleMouseMove = (e: React.MouseEvent<HTMLCanvasElement>) => {
if (!duration) return;
const canvas = canvasRef.current;
if (!canvas) return;
const rect = canvas.getBoundingClientRect();
const x = e.clientX - rect.left;
const visibleWidth = width * zoom;
const actualX = x + scrollOffset;
const currentTime = (actualX / visibleWidth) * duration;
const clampedTime = Math.max(0, Math.min(duration, currentTime));
// Handle selection dragging
if (isSelecting && onSelectionChange && selectionStart !== null) {
setIsDragging(true); // Mark that we're dragging
const start = Math.min(selectionStart, clampedTime);
const end = Math.max(selectionStart, clampedTime);
onSelectionChange({ start, end });
}
};
const handleMouseUp = (e: React.MouseEvent<HTMLCanvasElement>) => {
// If we didn't drag (just clicked), seek to that position and clear selection
if (!isDragging && onSeek) {
const canvas = canvasRef.current;
if (canvas) {
const rect = canvas.getBoundingClientRect();
const x = e.clientX - rect.left;
const visibleWidth = width * zoom;
const actualX = x + scrollOffset;
const clickTime = (actualX / visibleWidth) * duration;
const clampedTime = Math.max(0, Math.min(duration, clickTime));
// Seek and auto-play
onSeek(clampedTime, true);
// Clear selection on click
if (onSelectionChange) {
onSelectionChange(null);
}
}
}
// If we dragged, the selection is already set via handleMouseMove
setIsDragging(false);
setIsSelecting(false);
setSelectionStart(null);
};
const handleMouseLeave = () => {
setIsDragging(false);
setIsSelecting(false);
setSelectionStart(null);
};
return (
@@ -198,7 +315,7 @@ export function Waveform({
onMouseLeave={handleMouseLeave}
className={cn(
'w-full rounded-lg border border-border',
isDragging ? 'cursor-grabbing' : isSelecting ? 'cursor-text' : 'cursor-pointer'
)}
style={{ height: `${height}px` }}
/>
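The seek and selection handlers above all rely on the same time-to-pixel mapping: zoom scales the waveform's virtual width, and scrollOffset shifts the visible window. A minimal sketch of that mapping as standalone helpers — the function names here are illustrative, not part of the component:

```typescript
// Map a time (seconds) to an on-canvas x coordinate, given zoom and scroll.
function timeToX(time: number, duration: number, width: number, zoom: number, scrollOffset: number): number {
  const visibleWidth = width * zoom; // virtual width of the whole waveform
  return (time / duration) * visibleWidth - scrollOffset;
}

// Inverse: map a mouse x back to a clamped time, as the click/drag handlers do.
function xToTime(x: number, duration: number, width: number, zoom: number, scrollOffset: number): number {
  const visibleWidth = width * zoom;
  const t = ((x + scrollOffset) / visibleWidth) * duration;
  return Math.max(0, Math.min(duration, t)); // clamp into [0, duration]
}
```

Because both directions use the same `visibleWidth`, a round trip through `timeToX` then `xToTime` returns the original time for any in-range value.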


@@ -0,0 +1,447 @@
'use client';
import * as React from 'react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import { Slider } from '@/components/ui/Slider';
import type {
PitchShifterParameters,
TimeStretchParameters,
DistortionParameters,
BitcrusherParameters,
} from '@/lib/audio/effects/advanced';
export type AdvancedType = 'pitch' | 'timestretch' | 'distortion' | 'bitcrusher';
export type AdvancedParameters =
| (PitchShifterParameters & { type: 'pitch' })
| (BitcrusherParameters & { type: 'bitcrusher' })
| (TimeStretchParameters & { type: 'timestretch' })
| (DistortionParameters & { type: 'distortion' });
interface EffectPreset<T = any> {
name: string;
parameters: T;
}
const PRESETS: Record<AdvancedType, EffectPreset[]> = {
pitch: [
{ name: 'Octave Up', parameters: { semitones: 12, cents: 0, mix: 1.0 } },
{ name: 'Fifth Up', parameters: { semitones: 7, cents: 0, mix: 1.0 } },
{ name: 'Octave Down', parameters: { semitones: -12, cents: 0, mix: 1.0 } },
{ name: 'Subtle Shift', parameters: { semitones: 2, cents: 0, mix: 0.5 } },
],
timestretch: [
{ name: 'Half Speed', parameters: { rate: 0.5, preservePitch: true, mix: 1.0 } },
{ name: 'Double Speed', parameters: { rate: 2.0, preservePitch: true, mix: 1.0 } },
{ name: 'Slow Motion', parameters: { rate: 0.75, preservePitch: true, mix: 1.0 } },
{ name: 'Fast Forward', parameters: { rate: 1.5, preservePitch: true, mix: 1.0 } },
],
distortion: [
{ name: 'Light Overdrive', parameters: { drive: 0.3, tone: 0.7, output: 0.8, type: 'soft' as const, mix: 1.0 } },
{ name: 'Heavy Distortion', parameters: { drive: 0.8, tone: 0.5, output: 0.6, type: 'hard' as const, mix: 1.0 } },
{ name: 'Tube Warmth', parameters: { drive: 0.4, tone: 0.6, output: 0.75, type: 'tube' as const, mix: 0.8 } },
{ name: 'Extreme Fuzz', parameters: { drive: 1.0, tone: 0.3, output: 0.5, type: 'hard' as const, mix: 1.0 } },
],
bitcrusher: [
{ name: 'Lo-Fi', parameters: { bitDepth: 8, sampleRate: 8000, mix: 1.0 } },
{ name: 'Telephone', parameters: { bitDepth: 4, sampleRate: 4000, mix: 1.0 } },
{ name: 'Subtle Crunch', parameters: { bitDepth: 12, sampleRate: 22050, mix: 0.6 } },
{ name: 'Extreme Crush', parameters: { bitDepth: 2, sampleRate: 2000, mix: 1.0 } },
],
};
const DEFAULT_PARAMS: Record<AdvancedType, any> = {
pitch: { semitones: 0, cents: 0, mix: 1.0 },
timestretch: { rate: 1.0, preservePitch: true, mix: 1.0 },
distortion: { drive: 0.5, tone: 0.5, output: 0.7, type: 'soft', mix: 1.0 },
bitcrusher: { bitDepth: 8, sampleRate: 8000, mix: 1.0 },
};
const EFFECT_LABELS: Record<AdvancedType, string> = {
pitch: 'Pitch Shifter',
timestretch: 'Time Stretch',
distortion: 'Distortion',
bitcrusher: 'Bitcrusher',
};
export interface AdvancedParameterDialogProps {
open: boolean;
onClose: () => void;
effectType: AdvancedType;
onApply: (params: AdvancedParameters) => void;
}
export function AdvancedParameterDialog({
open,
onClose,
effectType,
onApply,
}: AdvancedParameterDialogProps) {
const [parameters, setParameters] = React.useState<any>(
DEFAULT_PARAMS[effectType]
);
const canvasRef = React.useRef<HTMLCanvasElement>(null);
// Reset parameters when effect type changes
React.useEffect(() => {
setParameters(DEFAULT_PARAMS[effectType]);
}, [effectType]);
// Draw visual feedback
React.useEffect(() => {
if (!open) return;
const canvas = canvasRef.current;
if (!canvas) return;
const ctx = canvas.getContext('2d');
if (!ctx) return;
const width = canvas.width;
const height = canvas.height;
// Clear canvas
ctx.clearRect(0, 0, width, height);
// Draw background
ctx.fillStyle = 'rgb(15, 23, 42)';
ctx.fillRect(0, 0, width, height);
// Draw visualization based on effect type
ctx.strokeStyle = 'rgb(59, 130, 246)';
ctx.lineWidth = 2;
if (effectType === 'pitch') {
const pitchParams = parameters as PitchShifterParameters;
const totalCents = (pitchParams.semitones ?? 0) * 100 + (pitchParams.cents ?? 0);
const pitchRatio = Math.pow(2, totalCents / 1200);
// Draw waveform with pitch shift
ctx.beginPath();
for (let x = 0; x < width; x++) {
const t = (x / width) * 4 * Math.PI * pitchRatio;
const y = height / 2 + Math.sin(t) * (height / 3);
if (x === 0) ctx.moveTo(x, y);
else ctx.lineTo(x, y);
}
ctx.stroke();
// Draw reference waveform
ctx.strokeStyle = 'rgba(148, 163, 184, 0.3)';
ctx.beginPath();
for (let x = 0; x < width; x++) {
const t = (x / width) * 4 * Math.PI;
const y = height / 2 + Math.sin(t) * (height / 3);
if (x === 0) ctx.moveTo(x, y);
else ctx.lineTo(x, y);
}
ctx.stroke();
} else if (effectType === 'timestretch') {
const stretchParams = parameters as TimeStretchParameters;
// Draw time-stretched waveform
ctx.beginPath();
for (let x = 0; x < width; x++) {
const t = (x / width) * 4 * Math.PI / (stretchParams.rate ?? 1.0);
const y = height / 2 + Math.sin(t) * (height / 3);
if (x === 0) ctx.moveTo(x, y);
else ctx.lineTo(x, y);
}
ctx.stroke();
} else if (effectType === 'distortion') {
const distParams = parameters as DistortionParameters;
// Draw distorted waveform
ctx.beginPath();
for (let x = 0; x < width; x++) {
const t = (x / width) * 4 * Math.PI;
let sample = Math.sin(t);
// Apply distortion
const drive = 1 + (distParams.drive ?? 0.5) * 10;
sample *= drive;
const distType = distParams.type ?? 'soft';
if (distType === 'soft') {
sample = Math.tanh(sample);
} else if (distType === 'hard') {
sample = Math.max(-1, Math.min(1, sample));
} else {
sample = sample > 0 ? 1 - Math.exp(-sample) : -1 + Math.exp(sample);
}
const y = height / 2 - sample * (height / 3);
if (x === 0) ctx.moveTo(x, y);
else ctx.lineTo(x, y);
}
ctx.stroke();
} else if (effectType === 'bitcrusher') {
const crushParams = parameters as BitcrusherParameters;
const bitLevels = Math.pow(2, crushParams.bitDepth ?? 8);
const step = 2 / bitLevels;
// Draw bitcrushed waveform
ctx.beginPath();
let lastY = height / 2;
for (let x = 0; x < width; x++) {
const t = (x / width) * 4 * Math.PI;
let sample = Math.sin(t);
// Quantize
sample = Math.floor(sample / step) * step;
const y = height / 2 - sample * (height / 3);
// Sample and hold effect
if (x % Math.max(1, Math.floor(width / ((crushParams.sampleRate ?? 8000) / 1000))) === 0) {
lastY = y;
}
if (x === 0) ctx.moveTo(x, lastY);
else ctx.lineTo(x, lastY);
}
ctx.stroke();
}
// Draw center line
ctx.strokeStyle = 'rgba(148, 163, 184, 0.2)';
ctx.lineWidth = 1;
ctx.beginPath();
ctx.moveTo(0, height / 2);
ctx.lineTo(width, height / 2);
ctx.stroke();
}, [parameters, effectType, open]);
const handleApply = () => {
onApply({ ...parameters, type: effectType } as AdvancedParameters);
onClose();
};
const handlePreset = (preset: EffectPreset) => {
setParameters(preset.parameters);
};
return (
<Modal open={open} onClose={onClose} title={EFFECT_LABELS[effectType]}>
<div className="space-y-4">
{/* Visual Feedback */}
<div className="rounded-lg border border-border bg-card p-4">
<canvas
ref={canvasRef}
width={400}
height={120}
className="w-full rounded"
/>
</div>
{/* Presets */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">Presets</label>
<div className="grid grid-cols-2 gap-2">
{PRESETS[effectType].map((preset) => (
<Button
key={preset.name}
variant="outline"
size="sm"
onClick={() => handlePreset(preset)}
className="justify-start"
>
{preset.name}
</Button>
))}
</div>
</div>
{/* Effect-specific parameters */}
{effectType === 'pitch' && (
<>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Semitones: {(parameters as PitchShifterParameters).semitones}
</label>
<Slider
value={[(parameters as PitchShifterParameters).semitones]}
onValueChange={([value]) =>
setParameters({ ...parameters, semitones: value })
}
min={-12}
max={12}
step={1}
/>
</div>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Cents: {(parameters as PitchShifterParameters).cents}
</label>
<Slider
value={[(parameters as PitchShifterParameters).cents]}
onValueChange={([value]) =>
setParameters({ ...parameters, cents: value })
}
min={-100}
max={100}
step={1}
/>
</div>
</>
)}
{effectType === 'timestretch' && (
<>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Rate: {((parameters as TimeStretchParameters).rate ?? 1.0).toFixed(2)}x
</label>
<Slider
value={[(parameters as TimeStretchParameters).rate ?? 1.0]}
onValueChange={([value]) =>
setParameters({ ...parameters, rate: value })
}
min={0.5}
max={2.0}
step={0.1}
/>
</div>
<div className="flex items-center space-x-2">
<input
type="checkbox"
id="preservePitch"
checked={(parameters as TimeStretchParameters).preservePitch ?? true}
onChange={(e) =>
setParameters({ ...parameters, preservePitch: e.target.checked })
}
className="h-4 w-4 rounded border-border"
/>
<label htmlFor="preservePitch" className="text-sm text-foreground">
Preserve Pitch
</label>
</div>
</>
)}
{effectType === 'distortion' && (
<>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Type
</label>
<div className="grid grid-cols-3 gap-2">
{(['soft', 'hard', 'tube'] as const).map((type) => (
<Button
key={type}
variant={((parameters as DistortionParameters).type ?? 'soft') === type ? 'secondary' : 'outline'}
size="sm"
onClick={() => setParameters({ ...parameters, type })}
>
{type.charAt(0).toUpperCase() + type.slice(1)}
</Button>
))}
</div>
</div>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Drive: {(((parameters as DistortionParameters).drive ?? 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[(parameters as DistortionParameters).drive ?? 0.5]}
onValueChange={([value]) =>
setParameters({ ...parameters, drive: value })
}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Tone: {(((parameters as DistortionParameters).tone ?? 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[(parameters as DistortionParameters).tone ?? 0.5]}
onValueChange={([value]) =>
setParameters({ ...parameters, tone: value })
}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Output: {(((parameters as DistortionParameters).output ?? 0.7) * 100).toFixed(0)}%
</label>
<Slider
value={[(parameters as DistortionParameters).output ?? 0.7]}
onValueChange={([value]) =>
setParameters({ ...parameters, output: value })
}
min={0}
max={1}
step={0.01}
/>
</div>
</>
)}
{effectType === 'bitcrusher' && (
<>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Bit Depth: {(parameters as BitcrusherParameters).bitDepth ?? 8} bits
</label>
<Slider
value={[(parameters as BitcrusherParameters).bitDepth ?? 8]}
onValueChange={([value]) =>
setParameters({ ...parameters, bitDepth: Math.round(value) })
}
min={1}
max={16}
step={1}
/>
</div>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Sample Rate: {(parameters as BitcrusherParameters).sampleRate ?? 8000} Hz
</label>
<Slider
value={[(parameters as BitcrusherParameters).sampleRate ?? 8000]}
onValueChange={([value]) =>
setParameters({ ...parameters, sampleRate: Math.round(value) })
}
min={100}
max={48000}
step={100}
/>
</div>
</>
)}
{/* Mix control */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Mix: {(parameters.mix * 100).toFixed(0)}%
</label>
<Slider
value={[parameters.mix]}
onValueChange={([value]) =>
setParameters({ ...parameters, mix: value })
}
min={0}
max={1}
step={0.01}
/>
</div>
{/* Actions */}
<div className="flex justify-end space-x-2 pt-4">
<Button variant="outline" onClick={onClose}>
Cancel
</Button>
<Button onClick={handleApply}>Apply</Button>
</div>
</div>
</Modal>
);
}
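The preview canvases in this dialog approximate the effects rather than running real DSP: the distortion preview applies a waveshaping function to a sine, and the bitcrusher preview quantizes it to 2^bitDepth levels. A standalone sketch of those two per-sample shaping functions, inferred from the drawing code above (not the actual audio path):

```typescript
type DistortionType = 'soft' | 'hard' | 'tube';

// Shape one sample the way the preview canvas does: apply drive, then clip.
function shapeSample(sample: number, drive: number, type: DistortionType): number {
  const driven = sample * (1 + drive * 10);
  if (type === 'soft') return Math.tanh(driven);
  if (type === 'hard') return Math.max(-1, Math.min(1, driven));
  // 'tube': asymmetric exponential curve, as in the preview branch
  return driven > 0 ? 1 - Math.exp(-driven) : -1 + Math.exp(driven);
}

// Quantize a [-1, 1] sample down to 2^bitDepth discrete levels.
function crushSample(sample: number, bitDepth: number): number {
  const step = 2 / Math.pow(2, bitDepth);
  return Math.floor(sample / step) * step;
}
```

At 1 bit the quantizer collapses everything to two steps, which is why the "Extreme Crush" preset sounds (and draws) like a square-ish wave.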


@@ -0,0 +1,525 @@
'use client';
import * as React from 'react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import { Slider } from '@/components/ui/Slider';
import { cn } from '@/lib/utils/cn';
import type {
CompressorParameters,
LimiterParameters,
GateParameters,
} from '@/lib/audio/effects/dynamics';
export type DynamicsType = 'compressor' | 'limiter' | 'gate';
export type DynamicsParameters =
| (CompressorParameters & { type: 'compressor' })
| (LimiterParameters & { type: 'limiter' })
| (GateParameters & { type: 'gate' });
export interface EffectPreset {
name: string;
parameters: Partial<CompressorParameters | LimiterParameters | GateParameters>;
}
export interface DynamicsParameterDialogProps {
open: boolean;
onClose: () => void;
effectType: DynamicsType;
onApply: (params: DynamicsParameters) => void;
}
const EFFECT_LABELS: Record<DynamicsType, string> = {
compressor: 'Compressor',
limiter: 'Limiter',
gate: 'Gate/Expander',
};
const EFFECT_DESCRIPTIONS: Record<DynamicsType, string> = {
compressor: 'Reduces dynamic range by lowering loud sounds',
limiter: 'Prevents audio from exceeding threshold',
gate: 'Reduces volume of quiet sounds below threshold',
};
const PRESETS: Record<DynamicsType, EffectPreset[]> = {
compressor: [
{ name: 'Gentle', parameters: { threshold: -20, ratio: 2, attack: 10, release: 100, knee: 6, makeupGain: 3 } },
{ name: 'Medium', parameters: { threshold: -18, ratio: 4, attack: 5, release: 50, knee: 3, makeupGain: 6 } },
{ name: 'Heavy', parameters: { threshold: -15, ratio: 8, attack: 1, release: 30, knee: 0, makeupGain: 10 } },
{ name: 'Vocal', parameters: { threshold: -16, ratio: 3, attack: 5, release: 80, knee: 4, makeupGain: 5 } },
],
limiter: [
{ name: 'Transparent', parameters: { threshold: -3, attack: 0.5, release: 50, makeupGain: 0 } },
{ name: 'Loud', parameters: { threshold: -1, attack: 0.1, release: 20, makeupGain: 2 } },
{ name: 'Broadcast', parameters: { threshold: -0.5, attack: 0.1, release: 10, makeupGain: 0 } },
{ name: 'Mastering', parameters: { threshold: -2, attack: 0.3, release: 30, makeupGain: 1 } },
],
gate: [
{ name: 'Gentle', parameters: { threshold: -40, ratio: 2, attack: 5, release: 100, knee: 6 } },
{ name: 'Medium', parameters: { threshold: -50, ratio: 4, attack: 1, release: 50, knee: 3 } },
{ name: 'Hard', parameters: { threshold: -60, ratio: 10, attack: 0.5, release: 20, knee: 0 } },
{ name: 'Noise Reduction', parameters: { threshold: -45, ratio: 6, attack: 1, release: 80, knee: 4 } },
],
};
export function DynamicsParameterDialog({
open,
onClose,
effectType,
onApply,
}: DynamicsParameterDialogProps) {
const [parameters, setParameters] = React.useState<DynamicsParameters>(() => {
if (effectType === 'compressor') {
return {
type: 'compressor',
threshold: -20,
ratio: 4,
attack: 5,
release: 50,
knee: 3,
makeupGain: 6,
};
} else if (effectType === 'limiter') {
return {
type: 'limiter',
threshold: -3,
attack: 0.5,
release: 50,
makeupGain: 0,
};
} else {
return {
type: 'gate',
threshold: -40,
ratio: 4,
attack: 5,
release: 50,
knee: 3,
};
}
});
const canvasRef = React.useRef<HTMLCanvasElement>(null);
// Get appropriate presets for this effect type
const presets = PRESETS[effectType] || [];
// Update parameters when effect type changes
React.useEffect(() => {
if (effectType === 'compressor') {
setParameters({
type: 'compressor',
threshold: -20,
ratio: 4,
attack: 5,
release: 50,
knee: 3,
makeupGain: 6,
});
} else if (effectType === 'limiter') {
setParameters({
type: 'limiter',
threshold: -3,
attack: 0.5,
release: 50,
makeupGain: 0,
});
} else {
setParameters({
type: 'gate',
threshold: -40,
ratio: 4,
attack: 5,
release: 50,
knee: 3,
});
}
}, [effectType]);
// Draw transfer curve (input level vs output level)
React.useEffect(() => {
if (!open || !canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Get actual dimensions
const rect = canvas.getBoundingClientRect();
const dpr = window.devicePixelRatio || 1;
// Ensure canvas has dimensions before drawing
if (rect.width === 0 || rect.height === 0) return;
// Set actual size in memory (scaled to account for extra pixel density)
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
// Normalize coordinate system to use CSS pixels
ctx.scale(dpr, dpr);
// Clear any previous drawings first
ctx.clearRect(0, 0, canvas.width, canvas.height);
const width = rect.width;
const height = rect.height;
const padding = 40;
const graphWidth = width - padding * 2;
const graphHeight = height - padding * 2;
// Clear canvas
ctx.fillStyle = getComputedStyle(canvas).getPropertyValue('background-color') || '#1a1a1a';
ctx.fillRect(0, 0, width, height);
// Draw axes
ctx.strokeStyle = 'rgba(128, 128, 128, 0.5)';
ctx.lineWidth = 1;
// Horizontal and vertical grid lines
ctx.beginPath();
for (let db = -60; db <= 0; db += 10) {
const x = padding + ((db + 60) / 60) * graphWidth;
const y = padding + graphHeight - ((db + 60) / 60) * graphHeight;
// Vertical grid line
ctx.moveTo(x, padding);
ctx.lineTo(x, padding + graphHeight);
// Horizontal grid line
ctx.moveTo(padding, y);
ctx.lineTo(padding + graphWidth, y);
}
ctx.stroke();
// Draw unity line (input = output)
ctx.strokeStyle = 'rgba(128, 128, 128, 0.3)';
ctx.lineWidth = 1;
ctx.setLineDash([5, 5]);
ctx.beginPath();
ctx.moveTo(padding, padding + graphHeight);
ctx.lineTo(padding + graphWidth, padding);
ctx.stroke();
ctx.setLineDash([]);
// Draw threshold line
const threshold = parameters.threshold;
const thresholdX = padding + ((threshold + 60) / 60) * graphWidth;
ctx.strokeStyle = 'rgba(255, 165, 0, 0.5)';
ctx.lineWidth = 1;
ctx.setLineDash([3, 3]);
ctx.beginPath();
ctx.moveTo(thresholdX, padding);
ctx.lineTo(thresholdX, padding + graphHeight);
ctx.stroke();
ctx.setLineDash([]);
// Draw transfer curve
ctx.strokeStyle = '#3b82f6'; // Primary blue
ctx.lineWidth = 2;
ctx.beginPath();
for (let inputDb = -60; inputDb <= 0; inputDb += 0.5) {
let outputDb = inputDb;
if (effectType === 'compressor' || effectType === 'limiter') {
const ratio = parameters.type === 'limiter' ? 100 : (parameters as CompressorParameters).ratio;
const knee = parameters.type === 'limiter' ? 0 : (parameters as CompressorParameters).knee;
const makeupGain = (parameters as CompressorParameters | LimiterParameters).makeupGain;
if (inputDb > threshold) {
const overThreshold = inputDb - threshold;
// Soft knee calculation
if (knee > 0 && overThreshold < knee / 2) {
const kneeRatio = overThreshold / (knee / 2);
const compressionAmount = (1 - 1 / ratio) * kneeRatio;
outputDb = inputDb - overThreshold * compressionAmount;
} else {
// Above knee - full compression
outputDb = threshold + overThreshold / ratio;
}
outputDb += makeupGain;
} else {
outputDb += makeupGain;
}
} else if (effectType === 'gate') {
const { ratio, knee } = parameters as GateParameters;
if (inputDb < threshold) {
const belowThreshold = threshold - inputDb;
// Soft knee calculation
if (knee > 0 && belowThreshold < knee / 2) {
const kneeRatio = belowThreshold / (knee / 2);
const expansionAmount = (ratio - 1) * kneeRatio;
outputDb = inputDb - belowThreshold * expansionAmount;
} else {
// Below knee - full expansion
outputDb = threshold - belowThreshold * ratio;
}
}
}
// Clamp output
outputDb = Math.max(-60, Math.min(0, outputDb));
const x = padding + ((inputDb + 60) / 60) * graphWidth;
const y = padding + graphHeight - ((outputDb + 60) / 60) * graphHeight;
if (inputDb === -60) {
ctx.moveTo(x, y);
} else {
ctx.lineTo(x, y);
}
}
ctx.stroke();
// Draw axis labels
ctx.fillStyle = 'rgba(156, 163, 175, 0.8)';
ctx.font = '10px sans-serif';
ctx.textAlign = 'center';
// X-axis label
ctx.fillText('Input Level (dB)', width / 2, height - 5);
// Y-axis label (rotated)
ctx.save();
ctx.translate(10, height / 2);
ctx.rotate(-Math.PI / 2);
ctx.fillText('Output Level (dB)', 0, 0);
ctx.restore();
// Tick labels
ctx.textAlign = 'center';
for (let db = -60; db <= 0; db += 20) {
const x = padding + ((db + 60) / 60) * graphWidth;
ctx.fillText(db.toString(), x, height - 20);
}
}, [parameters, effectType, open]);
const handleApply = () => {
onApply(parameters);
onClose();
};
const handlePresetClick = (preset: EffectPreset) => {
setParameters((prev) => ({
...prev,
...preset.parameters,
}));
};
return (
<Modal
open={open}
onClose={onClose}
title={EFFECT_LABELS[effectType] || 'Dynamics Processing'}
description={EFFECT_DESCRIPTIONS[effectType]}
size="lg"
footer={
<>
<Button variant="outline" onClick={onClose}>
Cancel
</Button>
<Button onClick={handleApply}>
Apply Effect
</Button>
</>
}
>
<div className="space-y-4">
{/* Transfer Curve Visualization */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Transfer Curve
</label>
<canvas
ref={canvasRef}
className="w-full h-64 border border-border rounded bg-background"
/>
<p className="text-xs text-muted-foreground">
Shows input vs output levels. Threshold (orange line), ratio, knee, and makeup gain affect this curve.
Attack and release control timing (not shown here).
</p>
</div>
{/* Presets */}
{presets.length > 0 && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Presets
</label>
<div className="grid grid-cols-2 gap-2">
{presets.map((preset) => (
<Button
key={preset.name}
variant="outline"
size="sm"
onClick={() => handlePresetClick(preset)}
className="justify-start"
>
{preset.name}
</Button>
))}
</div>
</div>
)}
{/* Threshold Parameter */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Threshold</span>
<span className="text-muted-foreground font-mono">
{(parameters.threshold ?? -20).toFixed(1)} dB
</span>
</label>
<Slider
value={[(parameters.threshold ?? -20)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, threshold: value }))
}
min={-60}
max={0}
step={0.5}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>-60 dB</span>
<span>0 dB</span>
</div>
</div>
{/* Ratio Parameter (Compressor and Gate only) */}
{(effectType === 'compressor' || effectType === 'gate') && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Ratio</span>
<span className="text-muted-foreground font-mono">
{((parameters as CompressorParameters | GateParameters).ratio ?? 4).toFixed(1)}:1
</span>
</label>
<Slider
value={[((parameters as CompressorParameters | GateParameters).ratio ?? 4)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, ratio: value }))
}
min={1}
max={20}
step={0.5}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>1:1 (None)</span>
<span>20:1 (Hard)</span>
</div>
</div>
)}
{/* Attack Parameter */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Attack</span>
<span className="text-muted-foreground font-mono">
{(parameters.attack ?? 5).toFixed(2)} ms
</span>
</label>
<Slider
value={[Math.log10(parameters.attack ?? 5)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, attack: Math.pow(10, value) }))
}
min={-1}
max={2}
step={0.01}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>0.1 ms (Fast)</span>
<span>100 ms (Slow)</span>
</div>
</div>
{/* Release Parameter */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Release</span>
<span className="text-muted-foreground font-mono">
{(parameters.release ?? 50).toFixed(1)} ms
</span>
</label>
<Slider
value={[Math.log10(parameters.release ?? 50)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, release: Math.pow(10, value) }))
}
min={1}
max={3}
step={0.01}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>10 ms (Fast)</span>
<span>1000 ms (Slow)</span>
</div>
</div>
{/* Knee Parameter (Compressor and Gate only) */}
{(effectType === 'compressor' || effectType === 'gate') && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Knee</span>
<span className="text-muted-foreground font-mono">
{((parameters as CompressorParameters | GateParameters).knee ?? 3).toFixed(1)} dB
</span>
</label>
<Slider
value={[((parameters as CompressorParameters | GateParameters).knee ?? 3)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, knee: value }))
}
min={0}
max={12}
step={0.5}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>0 dB (Hard)</span>
<span>12 dB (Soft)</span>
</div>
</div>
)}
{/* Makeup Gain Parameter (Compressor and Limiter only) */}
{(effectType === 'compressor' || effectType === 'limiter') && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Makeup Gain</span>
<span className="text-muted-foreground font-mono">
{((parameters as CompressorParameters | LimiterParameters).makeupGain ?? 0) > 0 ? '+' : ''}
{((parameters as CompressorParameters | LimiterParameters).makeupGain ?? 0).toFixed(1)} dB
</span>
</label>
<Slider
value={[((parameters as CompressorParameters | LimiterParameters).makeupGain ?? 0)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, makeupGain: value }))
}
min={0}
max={24}
step={0.5}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>0 dB</span>
<span>+24 dB</span>
</div>
</div>
)}
</div>
</Modal>
);
}
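The transfer-curve drawing loop above encodes this dialog's model of downward compression: unity gain below the threshold, a soft-knee blend over the first knee/2 dB above it, then full-ratio reduction, with makeup gain added throughout and the result clamped to the graph's -60..0 dB range. The same math extracted as a pure function, as a sketch for clarity rather than production DSP:

```typescript
// Output level (dB) for a given input level, per the curve-drawing logic.
function compressorOutputDb(
  inputDb: number,
  threshold: number,
  ratio: number,
  knee: number,
  makeupGain: number
): number {
  let outputDb = inputDb;
  if (inputDb > threshold) {
    const over = inputDb - threshold;
    if (knee > 0 && over < knee / 2) {
      // Soft knee: ramp the compression amount in over the first knee/2 dB.
      const kneeRatio = over / (knee / 2);
      outputDb = inputDb - over * (1 - 1 / ratio) * kneeRatio;
    } else {
      outputDb = threshold + over / ratio; // full compression above the knee
    }
  }
  return Math.max(-60, Math.min(0, outputDb + makeupGain));
}
```

Note the limiter reuses this curve with ratio pinned to 100 and knee to 0, which is why its drawn curve goes nearly flat above the threshold.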


@@ -0,0 +1,144 @@
'use client';
import * as React from 'react';
import { X, Search } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import type { EffectType } from '@/lib/audio/effects/chain';
import { EFFECT_NAMES } from '@/lib/audio/effects/chain';
export interface EffectBrowserProps {
open: boolean;
onClose: () => void;
onSelectEffect: (effectType: EffectType) => void;
}
const EFFECT_CATEGORIES = {
'Dynamics': ['compressor', 'limiter', 'gate'] as EffectType[],
'Filters': ['lowpass', 'highpass', 'bandpass', 'notch', 'lowshelf', 'highshelf', 'peaking'] as EffectType[],
'Time-Based': ['delay', 'reverb', 'chorus', 'flanger', 'phaser'] as EffectType[],
'Distortion': ['distortion', 'bitcrusher'] as EffectType[],
'Pitch & Time': ['pitch', 'timestretch'] as EffectType[],
};
const EFFECT_DESCRIPTIONS: Record<EffectType, string> = {
'compressor': 'Reduce dynamic range and control peaks',
'limiter': 'Prevent audio from exceeding a maximum level',
'gate': 'Reduce noise by cutting low-level signals',
'lowpass': 'Allow frequencies below cutoff to pass',
'highpass': 'Allow frequencies above cutoff to pass',
'bandpass': 'Allow frequencies within a range to pass',
'notch': 'Remove a specific frequency range',
'lowshelf': 'Boost or cut low frequencies',
'highshelf': 'Boost or cut high frequencies',
'peaking': 'Boost or cut a specific frequency band',
'delay': 'Create echoes and rhythmic repeats',
'reverb': 'Simulate acoustic space and ambience',
'chorus': 'Thicken sound with subtle pitch variations',
'flanger': 'Create sweeping comb filter effects',
'phaser': 'Create phase-shifted modulation effects',
'distortion': 'Add harmonic saturation and grit',
'bitcrusher': 'Reduce bit depth for lo-fi effects',
'pitch': 'Shift pitch without changing tempo',
'timestretch': 'Change tempo without affecting pitch',
};
export function EffectBrowser({ open, onClose, onSelectEffect }: EffectBrowserProps) {
const [search, setSearch] = React.useState('');
const [selectedCategory, setSelectedCategory] = React.useState<string | null>(null);
const handleSelectEffect = (effectType: EffectType) => {
onSelectEffect(effectType);
onClose();
setSearch('');
setSelectedCategory(null);
};
const filteredCategories = React.useMemo(() => {
if (!search) return EFFECT_CATEGORIES;
const searchLower = search.toLowerCase();
const filtered: Record<string, EffectType[]> = {};
Object.entries(EFFECT_CATEGORIES).forEach(([category, effects]) => {
const matchingEffects = effects.filter((effect) =>
EFFECT_NAMES[effect].toLowerCase().includes(searchLower) ||
EFFECT_DESCRIPTIONS[effect].toLowerCase().includes(searchLower)
);
if (matchingEffects.length > 0) {
filtered[category] = matchingEffects;
}
});
return filtered;
}, [search]);
if (!open) return null;
return (
<div className="fixed inset-0 z-50 flex items-center justify-center bg-black/50" onClick={onClose}>
<div
className="bg-card border border-border rounded-lg shadow-lg w-full max-w-2xl max-h-[80vh] flex flex-col"
onClick={(e) => e.stopPropagation()}
>
{/* Header */}
<div className="flex items-center justify-between p-4 border-b border-border">
<h2 className="text-lg font-semibold text-foreground">Add Effect</h2>
<Button variant="ghost" size="icon-sm" onClick={onClose}>
<X className="h-4 w-4" />
</Button>
</div>
{/* Search */}
<div className="p-4 border-b border-border">
<div className="relative">
<Search className="absolute left-3 top-1/2 -translate-y-1/2 h-4 w-4 text-muted-foreground" />
<input
type="text"
placeholder="Search effects..."
value={search}
onChange={(e) => setSearch(e.target.value)}
className="w-full pl-10 pr-4 py-2 bg-background border border-border rounded-md text-sm text-foreground placeholder:text-muted-foreground focus:outline-none focus:ring-2 focus:ring-primary"
autoFocus
/>
</div>
</div>
{/* Content */}
<div className="flex-1 overflow-y-auto custom-scrollbar p-4">
<div className="space-y-6">
{Object.entries(filteredCategories).map(([category, effects]) => (
<div key={category}>
<h3 className="text-sm font-semibold text-muted-foreground uppercase mb-2">
{category}
</h3>
<div className="grid grid-cols-2 gap-2">
{effects.map((effect) => (
<button
key={effect}
onClick={() => handleSelectEffect(effect)}
className={cn(
'px-4 py-3 text-left rounded-md border transition-colors',
'hover:bg-accent hover:border-primary',
'border-border bg-card text-foreground'
)}
>
<div className="font-medium text-sm">{EFFECT_NAMES[effect]}</div>
<div className="text-xs text-muted-foreground">{EFFECT_DESCRIPTIONS[effect]}</div>
</button>
))}
</div>
</div>
))}
</div>
{Object.keys(filteredCategories).length === 0 && (
<div className="text-center py-8 text-muted-foreground">
No effects found matching "{search}"
</div>
)}
</div>
</div>
</div>
);
}


@@ -0,0 +1,132 @@
'use client';
import * as React from 'react';
import { ChevronLeft, ChevronRight, Power, X } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import type { ChainEffect } from '@/lib/audio/effects/chain';
import { EffectParameters } from './EffectParameters';
export interface EffectDeviceProps {
effect: ChainEffect;
onToggleEnabled?: () => void;
onRemove?: () => void;
onUpdateParameters?: (parameters: any) => void;
onToggleExpanded?: () => void;
trackId?: string;
isPlaying?: boolean;
onParameterTouched?: (trackId: string, laneId: string, touched: boolean) => void;
automationLanes?: Array<{ id: string; parameterId: string; mode: string }>;
}
export function EffectDevice({
effect,
onToggleEnabled,
onRemove,
onUpdateParameters,
onToggleExpanded,
trackId,
isPlaying,
onParameterTouched,
automationLanes,
}: EffectDeviceProps) {
const isExpanded = effect.expanded || false;
return (
<div
className={cn(
'flex-shrink-0 flex flex-col h-full transition-all duration-200 overflow-hidden rounded-md',
effect.enabled
? 'bg-card border-l border-r border-b border-border'
: 'bg-card/40 border-l border-r border-b border-border/50 opacity-60 hover:opacity-80',
isExpanded ? 'min-w-96' : 'w-10'
)}
>
{!isExpanded ? (
/* Collapsed State */
<>
{/* Colored top indicator */}
<div className={cn('h-0.5 w-full', effect.enabled ? 'bg-primary' : 'bg-muted-foreground/20')} />
<button
onClick={onToggleExpanded}
className="w-full h-full flex flex-col items-center justify-between py-1 hover:bg-primary/10 transition-colors group"
title={`Expand ${effect.name}`}
>
<ChevronRight className="h-3 w-3 flex-shrink-0 text-muted-foreground group-hover:text-primary transition-colors" />
<span
className="flex-1 text-xs font-medium whitespace-nowrap text-muted-foreground group-hover:text-primary transition-colors"
style={{
writingMode: 'vertical-rl',
textOrientation: 'mixed',
}}
>
{effect.name}
</span>
<div
className={cn(
'w-1.5 h-1.5 rounded-full flex-shrink-0 mb-1',
effect.enabled ? 'bg-primary shadow-sm shadow-primary/50' : 'bg-muted-foreground/30'
)}
title={effect.enabled ? 'Enabled' : 'Disabled'}
/>
</button>
</>
) : (
<>
{/* Colored top indicator */}
<div className={cn('h-0.5 w-full', effect.enabled ? 'bg-primary' : 'bg-muted-foreground/20')} />
{/* Full-Width Header Row */}
<div className="flex items-center gap-1 px-2 py-1.5 border-b border-border/50 bg-muted/30 flex-shrink-0">
<Button
variant="ghost"
size="icon-sm"
onClick={onToggleExpanded}
title="Collapse device"
className="h-5 w-5 flex-shrink-0"
>
<ChevronLeft className="h-3 w-3" />
</Button>
<span className="text-xs font-semibold flex-1 min-w-0 truncate">{effect.name}</span>
<Button
variant="ghost"
size="icon-sm"
onClick={onToggleEnabled}
title={effect.enabled ? 'Disable effect' : 'Enable effect'}
className="h-5 w-5 flex-shrink-0"
>
<Power
className={cn(
'h-3 w-3',
effect.enabled ? 'text-primary' : 'text-muted-foreground'
)}
/>
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onRemove}
title="Remove effect"
className="h-5 w-5 flex-shrink-0"
>
<X className="h-3 w-3 text-destructive" />
</Button>
</div>
{/* Device Body */}
<div className="flex-1 min-h-0 overflow-y-auto custom-scrollbar p-3 bg-card/50">
<EffectParameters
effect={effect}
onUpdateParameters={onUpdateParameters}
trackId={trackId}
isPlaying={isPlaying}
onParameterTouched={onParameterTouched}
automationLanes={automationLanes}
/>
</div>
</>
)}
</div>
);
}


@@ -0,0 +1,391 @@
'use client';
import * as React from 'react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import { Slider } from '@/components/ui/Slider';
import { cn } from '@/lib/utils/cn';
import type { FilterType } from '@/lib/audio/effects/filters';
export interface FilterParameters {
type: FilterType;
frequency: number;
Q?: number;
gain?: number;
}
export interface EffectPreset {
name: string;
parameters: Partial<FilterParameters>;
}
export interface EffectParameterDialogProps {
open: boolean;
onClose: () => void;
effectType: 'lowpass' | 'highpass' | 'bandpass' | 'notch' | 'lowshelf' | 'highshelf' | 'peaking';
onApply: (params: FilterParameters) => void;
sampleRate?: number;
}
const EFFECT_LABELS: Record<string, string> = {
lowpass: 'Low-Pass Filter',
highpass: 'High-Pass Filter',
bandpass: 'Band-Pass Filter',
notch: 'Notch Filter',
lowshelf: 'Low Shelf Filter',
highshelf: 'High Shelf Filter',
peaking: 'Peaking EQ',
};
const EFFECT_DESCRIPTIONS: Record<string, string> = {
lowpass: 'Removes high frequencies above the cutoff',
highpass: 'Removes low frequencies below the cutoff',
bandpass: 'Isolates frequencies around the center frequency',
notch: 'Removes frequencies around the center frequency',
lowshelf: 'Boosts or cuts low frequencies',
highshelf: 'Boosts or cuts high frequencies',
peaking: 'Boosts or cuts a specific frequency band',
};
const PRESETS: Record<string, EffectPreset[]> = {
lowpass: [
{ name: 'Telephone', parameters: { frequency: 3000, Q: 0.7 } },
{ name: 'Radio', parameters: { frequency: 5000, Q: 1.0 } },
{ name: 'Warm', parameters: { frequency: 8000, Q: 0.5 } },
{ name: 'Muffled', parameters: { frequency: 1000, Q: 1.5 } },
],
highpass: [
{ name: 'Rumble Removal', parameters: { frequency: 80, Q: 0.7 } },
{ name: 'Voice Clarity', parameters: { frequency: 150, Q: 1.0 } },
{ name: 'Thin', parameters: { frequency: 300, Q: 0.5 } },
],
bandpass: [
{ name: 'Telephone', parameters: { frequency: 1000, Q: 2.0 } },
{ name: 'Vocal Range', parameters: { frequency: 2000, Q: 1.0 } },
{ name: 'Narrow', parameters: { frequency: 1000, Q: 10.0 } },
],
notch: [
{ name: '60Hz Hum', parameters: { frequency: 60, Q: 10.0 } },
{ name: '50Hz Hum', parameters: { frequency: 50, Q: 10.0 } },
{ name: 'Narrow Notch', parameters: { frequency: 1000, Q: 20.0 } },
],
lowshelf: [
{ name: 'Bass Boost', parameters: { frequency: 200, gain: 6 } },
{ name: 'Bass Cut', parameters: { frequency: 200, gain: -6 } },
{ name: 'Warmth', parameters: { frequency: 150, gain: 3 } },
],
highshelf: [
{ name: 'Treble Boost', parameters: { frequency: 3000, gain: 6 } },
{ name: 'Treble Cut', parameters: { frequency: 3000, gain: -6 } },
{ name: 'Brightness', parameters: { frequency: 5000, gain: 3 } },
],
peaking: [
{ name: 'Presence Boost', parameters: { frequency: 3000, Q: 1.0, gain: 4 } },
{ name: 'Vocal Cut', parameters: { frequency: 2000, Q: 2.0, gain: -3 } },
{ name: 'Narrow Boost', parameters: { frequency: 1000, Q: 5.0, gain: 6 } },
],
};
export function EffectParameterDialog({
open,
onClose,
effectType,
onApply,
sampleRate = 48000,
}: EffectParameterDialogProps) {
const [parameters, setParameters] = React.useState<FilterParameters>(() => ({
type: effectType,
frequency: effectType === 'highpass' ? 100 : 1000,
Q: 1.0,
gain: 0,
}));
const canvasRef = React.useRef<HTMLCanvasElement>(null);
// Get appropriate presets for this effect type
const presets = PRESETS[effectType] || [];
// Update parameters when effect type changes
React.useEffect(() => {
setParameters((prev) => ({ ...prev, type: effectType }));
}, [effectType]);
// Draw frequency response curve
React.useEffect(() => {
if (!canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Get actual dimensions
const rect = canvas.getBoundingClientRect();
const dpr = window.devicePixelRatio || 1;
// Set actual size in memory (scaled to account for extra pixel density)
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
// Normalize coordinate system to use CSS pixels
ctx.scale(dpr, dpr);
const width = rect.width;
const height = rect.height;
const nyquist = sampleRate / 2;
// Clear canvas
ctx.fillStyle = getComputedStyle(canvas).getPropertyValue('background-color') || '#1a1a1a';
ctx.fillRect(0, 0, width, height);
// Draw grid
ctx.strokeStyle = 'rgba(128, 128, 128, 0.2)';
ctx.lineWidth = 1;
// Horizontal grid lines (dB)
for (let db = -24; db <= 24; db += 6) {
const y = height / 2 - (db / 24) * (height / 2);
ctx.beginPath();
ctx.moveTo(0, y);
ctx.lineTo(width, y);
ctx.stroke();
}
// Vertical grid lines (frequency)
const frequencies = [100, 1000, 10000];
frequencies.forEach((freq) => {
const x = (Math.log10(freq) - 1) / (Math.log10(nyquist) - 1) * width;
ctx.beginPath();
ctx.moveTo(x, 0);
ctx.lineTo(x, height);
ctx.stroke();
});
// Draw frequency response curve
ctx.strokeStyle = '#3b82f6'; // Primary blue
ctx.lineWidth = 2;
ctx.beginPath();
for (let x = 0; x < width; x++) {
const freq = Math.pow(10, 1 + (x / width) * (Math.log10(nyquist) - 1));
const magnitude = getFilterMagnitude(freq, parameters, sampleRate);
const db = 20 * Math.log10(Math.max(magnitude, 0.0001)); // Prevent log(0)
const y = height / 2 - (db / 24) * (height / 2);
if (x === 0) {
ctx.moveTo(x, y);
} else {
ctx.lineTo(x, y);
}
}
ctx.stroke();
// Draw 0dB line
ctx.strokeStyle = 'rgba(156, 163, 175, 0.5)'; // Muted foreground
ctx.lineWidth = 1;
ctx.setLineDash([5, 5]);
ctx.beginPath();
ctx.moveTo(0, height / 2);
ctx.lineTo(width, height / 2);
ctx.stroke();
ctx.setLineDash([]);
}, [parameters, sampleRate]);
const handleApply = () => {
onApply(parameters);
onClose();
};
const handlePresetClick = (preset: EffectPreset) => {
setParameters((prev) => ({
...prev,
...preset.parameters,
}));
};
const needsQ = ['lowpass', 'highpass', 'bandpass', 'notch', 'peaking'].includes(effectType);
const needsGain = ['lowshelf', 'highshelf', 'peaking'].includes(effectType);
return (
<Modal
open={open}
onClose={onClose}
title={EFFECT_LABELS[effectType] || 'Effect Parameters'}
description={EFFECT_DESCRIPTIONS[effectType]}
size="lg"
footer={
<>
<Button variant="outline" onClick={onClose}>
Cancel
</Button>
<Button onClick={handleApply}>
Apply Effect
</Button>
</>
}
>
<div className="space-y-4">
{/* Frequency Response Visualization */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Frequency Response
</label>
<canvas
ref={canvasRef}
className="w-full h-48 border border-border rounded bg-background"
/>
<div className="flex justify-between text-xs text-muted-foreground px-2">
<span>100 Hz</span>
<span>1 kHz</span>
<span>10 kHz</span>
</div>
</div>
{/* Presets */}
{presets.length > 0 && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Presets
</label>
<div className="grid grid-cols-2 gap-2">
{presets.map((preset) => (
<Button
key={preset.name}
variant="outline"
size="sm"
onClick={() => handlePresetClick(preset)}
className="justify-start"
>
{preset.name}
</Button>
))}
</div>
</div>
)}
{/* Frequency Parameter */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Frequency</span>
<span className="text-muted-foreground font-mono">
{parameters.frequency.toFixed(0)} Hz
</span>
</label>
<Slider
value={[Math.log10(parameters.frequency)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, frequency: Math.pow(10, value) }))
}
min={1}
max={Math.log10(sampleRate / 2)}
step={0.01}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>10 Hz</span>
<span>{(sampleRate / 2).toFixed(0)} Hz</span>
</div>
</div>
{/* Q Parameter */}
{needsQ && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Q (Resonance)</span>
<span className="text-muted-foreground font-mono">
{parameters.Q?.toFixed(2)}
</span>
</label>
<Slider
value={[parameters.Q || 1.0]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, Q: value }))
}
min={0.1}
max={20}
step={0.1}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>0.1 (Gentle)</span>
<span>20 (Sharp)</span>
</div>
</div>
)}
{/* Gain Parameter */}
{needsGain && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Gain</span>
<span className="text-muted-foreground font-mono">
{(parameters.gain ?? 0) > 0 ? '+' : ''}
{parameters.gain?.toFixed(1)} dB
</span>
</label>
<Slider
value={[parameters.gain || 0]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, gain: value }))
}
min={-24}
max={24}
step={0.5}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>-24 dB</span>
<span>+24 dB</span>
</div>
</div>
)}
</div>
</Modal>
);
}
/**
* Calculate filter magnitude at a given frequency
*/
function getFilterMagnitude(
freq: number,
params: FilterParameters,
sampleRate: number
): number {
const Q = params.Q || 1.0;
const gain = params.gain || 0;
const A = Math.pow(10, gain / 40); // linear amplitude factor for shelf/peaking gain
// Simplified magnitude calculation for different filter types
switch (params.type) {
case 'lowpass': {
const ratio = freq / params.frequency;
// ratio^4 under the square root approximates a 2nd-order (12 dB/oct) rolloff
return 1 / Math.sqrt(1 + Math.pow(ratio * Q, 4));
}
case 'highpass': {
const ratio = params.frequency / freq;
return 1 / Math.sqrt(1 + Math.pow(ratio * Q, 4));
}
case 'bandpass': {
const ratio = Math.abs(freq - params.frequency) / (params.frequency / Q);
return 1 / Math.sqrt(1 + Math.pow(ratio, 2));
}
case 'notch': {
const ratio = Math.abs(freq - params.frequency) / (params.frequency / Q);
return ratio / Math.sqrt(1 + Math.pow(ratio, 2)); // ratio is already non-negative
}
case 'lowshelf':
case 'highshelf':
case 'peaking': {
// Simplified for visualization
const dist = Math.abs(Math.log(freq / params.frequency));
const influence = Math.exp(-dist * Q);
return 1 + (A - 1) * influence;
}
default:
return 1;
}
}
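The simplified low-pass model above can be sanity-checked in isolation: at the cutoff frequency with Q = 1 it should sit near -3 dB, and an octave above it should have dropped by roughly 12 dB. A minimal standalone sketch (duplicating the formula rather than importing the component; the numbers are from the simplified model, not an exact biquad response):

```typescript
// Standalone sketch of the simplified 2nd-order low-pass magnitude model
// used by getFilterMagnitude (a visualization approximation, not a biquad).
function lowpassMagnitude(freq: number, cutoff: number, Q: number): number {
  const ratio = freq / cutoff;
  // ratio^4 under the square root gives a 2nd-order (12 dB/oct) style rolloff
  return 1 / Math.sqrt(1 + Math.pow(ratio * Q, 4));
}

const atCutoff = lowpassMagnitude(1000, 1000, 1.0); // ≈ 0.707 (-3 dB)
const octaveUp = lowpassMagnitude(2000, 1000, 1.0); // ≈ 0.243 (≈ -12 dB)
```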


@@ -0,0 +1,777 @@
'use client';
import * as React from 'react';
import { Button } from '@/components/ui/Button';
import { Slider } from '@/components/ui/Slider';
import type { ChainEffect } from '@/lib/audio/effects/chain';
import type {
PitchShifterParameters,
TimeStretchParameters,
DistortionParameters,
BitcrusherParameters,
} from '@/lib/audio/effects/advanced';
import type {
CompressorParameters,
LimiterParameters,
GateParameters,
} from '@/lib/audio/effects/dynamics';
import type {
DelayParameters,
ReverbParameters,
ChorusParameters,
FlangerParameters,
PhaserParameters,
} from '@/lib/audio/effects/time-based';
import type { FilterOptions } from '@/lib/audio/effects/filters';
export interface EffectParametersProps {
effect: ChainEffect;
onUpdateParameters?: (parameters: any) => void;
trackId?: string;
isPlaying?: boolean;
onParameterTouched?: (trackId: string, laneId: string, touched: boolean) => void;
automationLanes?: Array<{ id: string; parameterId: string; mode: string }>;
}
export function EffectParameters({
effect,
onUpdateParameters,
trackId,
isPlaying,
onParameterTouched,
automationLanes = []
}: EffectParametersProps) {
const params = effect.parameters || {};
const updateParam = (key: string, value: any) => {
if (onUpdateParameters) {
onUpdateParameters({ ...params, [key]: value });
}
};
// Memoize touch handlers for all parameters
const touchHandlers = React.useMemo(() => {
if (!trackId || !isPlaying || !onParameterTouched || !automationLanes) {
return {};
}
const handlers: Record<string, { onTouchStart: () => void; onTouchEnd: () => void }> = {};
automationLanes.forEach(lane => {
if (!lane.parameterId.startsWith(`effect.${effect.id}.`)) {
return;
}
// For effect parameters, write mode works like touch mode
if (lane.mode !== 'touch' && lane.mode !== 'latch' && lane.mode !== 'write') {
return;
}
// Extract parameter name from parameterId (effect.{effectId}.{paramName})
const parts = lane.parameterId.split('.');
if (parts.length !== 3) return;
const paramName = parts[2];
handlers[paramName] = {
onTouchStart: () => {
queueMicrotask(() => onParameterTouched(trackId, lane.id, true));
},
onTouchEnd: () => {
queueMicrotask(() => onParameterTouched(trackId, lane.id, false));
},
};
});
return handlers;
}, [trackId, isPlaying, onParameterTouched, effect.id, automationLanes]);
// Helper to get touch handlers for a parameter
const getTouchHandlers = (paramName: string) => {
return touchHandlers[paramName] || {};
};
// Filter effects
if (['lowpass', 'highpass', 'bandpass', 'notch', 'lowshelf', 'highshelf', 'peaking'].includes(effect.type)) {
const filterParams = params as FilterOptions;
return (
<div className="grid grid-cols-2 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Frequency: {Math.round(filterParams.frequency || 1000)} Hz
</label>
<Slider
value={[filterParams.frequency || 1000]}
onValueChange={([value]) => updateParam('frequency', value)}
min={20}
max={20000}
step={1}
{...getTouchHandlers('frequency')}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Q: {(filterParams.Q || 1).toFixed(2)}
</label>
<Slider
value={[filterParams.Q || 1]}
onValueChange={([value]) => updateParam('Q', value)}
min={0.1}
max={20}
step={0.1}
{...getTouchHandlers('Q')}
/>
</div>
{['lowshelf', 'highshelf', 'peaking'].includes(effect.type) && (
<div className="space-y-1 col-span-2">
<label className="text-xs font-medium">
Gain: {(filterParams.gain || 0).toFixed(1)} dB
</label>
<Slider
value={[filterParams.gain || 0]}
onValueChange={([value]) => updateParam('gain', value)}
min={-40}
max={40}
step={0.5}
{...getTouchHandlers('gain')}
/>
</div>
)}
</div>
);
}
// Compressor
if (effect.type === 'compressor') {
const compParams = params as CompressorParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Threshold: {(compParams.threshold || -24).toFixed(1)} dB
</label>
<Slider
value={[compParams.threshold || -24]}
onValueChange={([value]) => updateParam('threshold', value)}
min={-60}
max={0}
step={0.5}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Ratio: {(compParams.ratio || 4).toFixed(1)}:1
</label>
<Slider
value={[compParams.ratio || 4]}
onValueChange={([value]) => updateParam('ratio', value)}
min={1}
max={20}
step={0.5}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Knee: {(compParams.knee || 30).toFixed(1)} dB
</label>
<Slider
value={[compParams.knee || 30]}
onValueChange={([value]) => updateParam('knee', value)}
min={0}
max={40}
step={1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Attack: {(compParams.attack || 0.003).toFixed(3)} s
</label>
<Slider
value={[compParams.attack || 0.003]}
onValueChange={([value]) => updateParam('attack', value)}
min={0.001}
max={1}
step={0.001}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Release: {(compParams.release || 0.25).toFixed(3)} s
</label>
<Slider
value={[compParams.release || 0.25]}
onValueChange={([value]) => updateParam('release', value)}
min={0.01}
max={3}
step={0.01}
/>
</div>
</div>
);
}
// Limiter
if (effect.type === 'limiter') {
const limParams = params as LimiterParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Threshold: {(limParams.threshold || -3).toFixed(1)} dB
</label>
<Slider
value={[limParams.threshold || -3]}
onValueChange={([value]) => updateParam('threshold', value)}
min={-30}
max={0}
step={0.5}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Release: {(limParams.release || 0.05).toFixed(3)} s
</label>
<Slider
value={[limParams.release || 0.05]}
onValueChange={([value]) => updateParam('release', value)}
min={0.01}
max={1}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Makeup: {(limParams.makeupGain || 0).toFixed(1)} dB
</label>
<Slider
value={[limParams.makeupGain || 0]}
onValueChange={([value]) => updateParam('makeupGain', value)}
min={0}
max={20}
step={0.5}
/>
</div>
</div>
);
}
// Gate
if (effect.type === 'gate') {
const gateParams = params as GateParameters;
return (
<div className="grid grid-cols-2 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Threshold: {(gateParams.threshold || -40).toFixed(1)} dB
</label>
<Slider
value={[gateParams.threshold || -40]}
onValueChange={([value]) => updateParam('threshold', value)}
min={-80}
max={0}
step={0.5}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Ratio: {(gateParams.ratio || 10).toFixed(1)}:1
</label>
<Slider
value={[gateParams.ratio || 10]}
onValueChange={([value]) => updateParam('ratio', value)}
min={1}
max={20}
step={0.5}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Attack: {(gateParams.attack || 0.001).toFixed(3)} s
</label>
<Slider
value={[gateParams.attack || 0.001]}
onValueChange={([value]) => updateParam('attack', value)}
min={0.0001}
max={0.5}
step={0.0001}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Release: {(gateParams.release || 0.1).toFixed(3)} s
</label>
<Slider
value={[gateParams.release || 0.1]}
onValueChange={([value]) => updateParam('release', value)}
min={0.01}
max={3}
step={0.01}
/>
</div>
</div>
);
}
// Delay
if (effect.type === 'delay') {
const delayParams = params as DelayParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Time: {(delayParams.time || 0.5).toFixed(3)} s
</label>
<Slider
value={[delayParams.time || 0.5]}
onValueChange={([value]) => updateParam('time', value)}
min={0.001}
max={2}
step={0.001}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Feedback: {((delayParams.feedback || 0.3) * 100).toFixed(0)}%
</label>
<Slider
value={[delayParams.feedback || 0.3]}
onValueChange={([value]) => updateParam('feedback', value)}
min={0}
max={0.9}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((delayParams.mix || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[delayParams.mix || 0.5]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Reverb
if (effect.type === 'reverb') {
const reverbParams = params as ReverbParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Room Size: {((reverbParams.roomSize || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[reverbParams.roomSize || 0.5]}
onValueChange={([value]) => updateParam('roomSize', value)}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Damping: {((reverbParams.damping || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[reverbParams.damping || 0.5]}
onValueChange={([value]) => updateParam('damping', value)}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((reverbParams.mix || 0.3) * 100).toFixed(0)}%
</label>
<Slider
value={[reverbParams.mix || 0.3]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Chorus
if (effect.type === 'chorus') {
const chorusParams = params as ChorusParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Rate: {(chorusParams.rate || 1.5).toFixed(2)} Hz
</label>
<Slider
value={[chorusParams.rate || 1.5]}
onValueChange={([value]) => updateParam('rate', value)}
min={0.1}
max={10}
step={0.1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Depth: {((chorusParams.depth || 0.002) * 1000).toFixed(2)} ms
</label>
<Slider
value={[chorusParams.depth || 0.002]}
onValueChange={([value]) => updateParam('depth', value)}
min={0.0001}
max={0.01}
step={0.0001}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((chorusParams.mix || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[chorusParams.mix || 0.5]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Flanger
if (effect.type === 'flanger') {
const flangerParams = params as FlangerParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Rate: {(flangerParams.rate || 0.5).toFixed(2)} Hz
</label>
<Slider
value={[flangerParams.rate || 0.5]}
onValueChange={([value]) => updateParam('rate', value)}
min={0.1}
max={10}
step={0.1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Depth: {((flangerParams.depth || 0.002) * 1000).toFixed(2)} ms
</label>
<Slider
value={[flangerParams.depth || 0.002]}
onValueChange={([value]) => updateParam('depth', value)}
min={0.0001}
max={0.01}
step={0.0001}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Feedback: {((flangerParams.feedback || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[flangerParams.feedback || 0.5]}
onValueChange={([value]) => updateParam('feedback', value)}
min={0}
max={0.95}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((flangerParams.mix || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[flangerParams.mix || 0.5]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Phaser
if (effect.type === 'phaser') {
const phaserParams = params as PhaserParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Rate: {(phaserParams.rate || 0.5).toFixed(2)} Hz
</label>
<Slider
value={[phaserParams.rate || 0.5]}
onValueChange={([value]) => updateParam('rate', value)}
min={0.1}
max={10}
step={0.1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Depth: {((phaserParams.depth || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[phaserParams.depth || 0.5]}
onValueChange={([value]) => updateParam('depth', value)}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Stages: {phaserParams.stages || 4}
</label>
<Slider
value={[phaserParams.stages || 4]}
onValueChange={([value]) => updateParam('stages', Math.round(value))}
min={2}
max={12}
step={1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((phaserParams.mix || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[phaserParams.mix || 0.5]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Pitch Shifter
if (effect.type === 'pitch') {
const pitchParams = params as PitchShifterParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Semitones: {pitchParams.semitones || 0}
</label>
<Slider
value={[pitchParams.semitones || 0]}
onValueChange={([value]) => updateParam('semitones', Math.round(value))}
min={-12}
max={12}
step={1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Cents: {pitchParams.cents || 0}
</label>
<Slider
value={[pitchParams.cents || 0]}
onValueChange={([value]) => updateParam('cents', Math.round(value))}
min={-100}
max={100}
step={1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((pitchParams.mix || 1) * 100).toFixed(0)}%
</label>
<Slider
value={[pitchParams.mix || 1]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Time Stretch
if (effect.type === 'timestretch') {
const stretchParams = params as TimeStretchParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Rate: {(stretchParams.rate || 1).toFixed(2)}x
</label>
<Slider
value={[stretchParams.rate || 1]}
onValueChange={([value]) => updateParam('rate', value)}
min={0.5}
max={2}
step={0.01}
/>
</div>
<div className="flex items-center gap-2 py-1 px-2 border-b border-border/30">
<input
type="checkbox"
id={`preserve-pitch-${effect.id}`}
checked={stretchParams.preservePitch ?? true}
onChange={(e) => updateParam('preservePitch', e.target.checked)}
className="h-3 w-3 rounded border-border"
/>
<label htmlFor={`preserve-pitch-${effect.id}`} className="text-xs">
Preserve Pitch
</label>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((stretchParams.mix || 1) * 100).toFixed(0)}%
</label>
<Slider
value={[stretchParams.mix || 1]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Distortion
if (effect.type === 'distortion') {
const distParams = params as DistortionParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">Type</label>
<div className="grid grid-cols-3 gap-1">
{(['soft', 'hard', 'tube'] as const).map((type) => (
<Button
key={type}
variant={(distParams.type || 'soft') === type ? 'secondary' : 'outline'}
size="sm"
onClick={() => updateParam('type', type)}
className="text-xs py-1 h-auto"
>
{type}
</Button>
))}
</div>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Drive: {((distParams.drive || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[distParams.drive || 0.5]}
onValueChange={([value]) => updateParam('drive', value)}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Tone: {((distParams.tone || 0.5) * 100).toFixed(0)}%
</label>
<Slider
value={[distParams.tone || 0.5]}
onValueChange={([value]) => updateParam('tone', value)}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Output: {((distParams.output || 0.7) * 100).toFixed(0)}%
</label>
<Slider
value={[distParams.output || 0.7]}
onValueChange={([value]) => updateParam('output', value)}
min={0}
max={1}
step={0.01}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((distParams.mix || 1) * 100).toFixed(0)}%
</label>
<Slider
value={[distParams.mix || 1]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Bitcrusher
if (effect.type === 'bitcrusher') {
const crushParams = params as BitcrusherParameters;
return (
<div className="grid grid-cols-3 gap-x-4 gap-y-2">
<div className="space-y-1">
<label className="text-xs font-medium">
Bit Depth: {crushParams.bitDepth ?? 8} bits
</label>
<Slider
value={[crushParams.bitDepth ?? 8]}
onValueChange={([value]) => updateParam('bitDepth', Math.round(value))}
min={1}
max={16}
step={1}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Sample Rate: {crushParams.sampleRate ?? 8000} Hz
</label>
<Slider
value={[crushParams.sampleRate ?? 8000]}
onValueChange={([value]) => updateParam('sampleRate', Math.round(value))}
min={100}
max={48000}
step={100}
/>
</div>
<div className="space-y-1">
<label className="text-xs font-medium">
Mix: {((crushParams.mix ?? 1) * 100).toFixed(0)}%
</label>
<Slider
value={[crushParams.mix ?? 1]}
onValueChange={([value]) => updateParam('mix', value)}
min={0}
max={1}
step={0.01}
/>
</div>
</div>
);
}
// Fallback for unknown effects
return (
<div className="text-xs text-muted-foreground/70 italic text-center py-4">
No parameters available
</div>
);
}
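A note on the default-value pattern used throughout these slider renderers: `||` treats every falsy value as missing, so a legitimate parameter value of `0` (drive or mix turned fully down) would be silently replaced by the default, whereas the nullish-coalescing operator `??` only falls back on `null`/`undefined`. A minimal sketch (`readParam` is an illustrative helper, not part of this codebase):

```typescript
// Illustrative helper: read a numeric effect parameter with a default.
// `??` preserves an explicit 0; `||` would discard it.
function readParam(value: number | undefined, fallback: number): number {
  return value ?? fallback;
}

// A mix of 0 (fully dry) must stay 0, not jump back to the default:
readParam(0, 1);         // → 0
readParam(undefined, 1); // → 1
// Compare: (0 || 1) evaluates to 1, losing the user's setting.
```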

View File

@@ -0,0 +1,144 @@
'use client';
import * as React from 'react';
import { GripVertical, Power, PowerOff, Trash2, Settings } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import type { ChainEffect, EffectChain } from '@/lib/audio/effects/chain';
import { EFFECT_NAMES } from '@/lib/audio/effects/chain';
export interface EffectRackProps {
chain: EffectChain;
onToggleEffect: (effectId: string) => void;
onRemoveEffect: (effectId: string) => void;
onReorderEffects: (fromIndex: number, toIndex: number) => void;
onEditEffect?: (effect: ChainEffect) => void;
className?: string;
}
export function EffectRack({
chain,
onToggleEffect,
onRemoveEffect,
onReorderEffects,
onEditEffect,
className,
}: EffectRackProps) {
const [draggedIndex, setDraggedIndex] = React.useState<number | null>(null);
const [dragOverIndex, setDragOverIndex] = React.useState<number | null>(null);
const handleDragStart = (e: React.DragEvent, index: number) => {
setDraggedIndex(index);
e.dataTransfer.effectAllowed = 'move';
};
const handleDragOver = (e: React.DragEvent, index: number) => {
e.preventDefault();
if (draggedIndex === null || draggedIndex === index) return;
setDragOverIndex(index);
};
const handleDrop = (e: React.DragEvent, index: number) => {
e.preventDefault();
if (draggedIndex === null || draggedIndex === index) return;
onReorderEffects(draggedIndex, index);
setDraggedIndex(null);
setDragOverIndex(null);
};
const handleDragEnd = () => {
setDraggedIndex(null);
setDragOverIndex(null);
};
if (chain.effects.length === 0) {
return (
<div className={cn('p-4 text-center', className)}>
<p className="text-sm text-muted-foreground">
No effects in chain. Add effects from the side panel to get started.
</p>
</div>
);
}
return (
<div className={cn('space-y-2', className)}>
{chain.effects.map((effect, index) => (
<div
key={effect.id}
draggable
onDragStart={(e) => handleDragStart(e, index)}
onDragOver={(e) => handleDragOver(e, index)}
onDrop={(e) => handleDrop(e, index)}
onDragEnd={handleDragEnd}
className={cn(
'flex items-center gap-2 p-3 rounded-lg border transition-all',
effect.enabled
? 'bg-card border-border'
: 'bg-muted/50 border-border/50 opacity-60',
draggedIndex === index && 'opacity-50',
dragOverIndex === index && 'border-primary'
)}
>
{/* Drag Handle */}
<div
className="cursor-grab active:cursor-grabbing text-muted-foreground hover:text-foreground"
title="Drag to reorder"
>
<GripVertical className="h-4 w-4" />
</div>
{/* Effect Info */}
<div className="flex-1 min-w-0">
<div className="text-sm font-medium text-foreground truncate">
{effect.name}
</div>
<div className="text-xs text-muted-foreground">
{EFFECT_NAMES[effect.type]}
</div>
</div>
{/* Actions */}
<div className="flex items-center gap-1">
{/* Edit Button (if edit handler provided) */}
{onEditEffect && (
<Button
variant="ghost"
size="icon-sm"
onClick={() => onEditEffect(effect)}
title="Edit parameters"
>
<Settings className="h-4 w-4" />
</Button>
)}
{/* Toggle Enable/Disable */}
<Button
variant="ghost"
size="icon-sm"
onClick={() => onToggleEffect(effect.id)}
title={effect.enabled ? 'Disable effect' : 'Enable effect'}
>
{effect.enabled ? (
<Power className="h-4 w-4 text-success" />
) : (
<PowerOff className="h-4 w-4 text-muted-foreground" />
)}
</Button>
{/* Remove */}
<Button
variant="ghost"
size="icon-sm"
onClick={() => onRemoveEffect(effect.id)}
title="Remove effect"
>
<Trash2 className="h-4 w-4 text-destructive" />
</Button>
</div>
</div>
))}
</div>
);
}
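`EffectRack` itself only reports the drag result via `onReorderEffects(fromIndex, toIndex)`; the parent owns the array. One common way to implement that callback is a splice-based move (a sketch under that assumption; `moveItem` is a hypothetical name):

```typescript
// Hypothetical reorder implementation: copy the array (never mutate
// React state in place), remove the dragged item, re-insert at the drop index.
function moveItem<T>(items: T[], fromIndex: number, toIndex: number): T[] {
  const next = [...items];
  const [moved] = next.splice(fromIndex, 1);
  next.splice(toIndex, 0, moved);
  return next;
}

moveItem(['eq', 'delay', 'reverb'], 0, 2); // → ['delay', 'reverb', 'eq']
```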

View File

@@ -0,0 +1,202 @@
'use client';
import * as React from 'react';
import { ChevronDown, ChevronUp, Plus } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { EffectDevice } from './EffectDevice';
import { EffectBrowser } from './EffectBrowser';
import type { Track } from '@/types/track';
import type { EffectType } from '@/lib/audio/effects/chain';
import { cn } from '@/lib/utils/cn';
export interface EffectsPanelProps {
track: Track | null; // Selected track
visible: boolean;
height: number;
onToggleVisible: () => void;
onResizeHeight: (height: number) => void;
onAddEffect?: (effectType: EffectType) => void;
onToggleEffect?: (effectId: string) => void;
onRemoveEffect?: (effectId: string) => void;
onUpdateEffect?: (effectId: string, parameters: any) => void;
onToggleEffectExpanded?: (effectId: string) => void;
}
export function EffectsPanel({
track,
visible,
height,
onToggleVisible,
onResizeHeight,
onAddEffect,
onToggleEffect,
onRemoveEffect,
onUpdateEffect,
onToggleEffectExpanded,
}: EffectsPanelProps) {
const [effectBrowserOpen, setEffectBrowserOpen] = React.useState(false);
const [isResizing, setIsResizing] = React.useState(false);
const resizeStartRef = React.useRef({ y: 0, height: 0 });
// Resize handler
const handleResizeStart = React.useCallback(
(e: React.MouseEvent) => {
e.preventDefault();
setIsResizing(true);
resizeStartRef.current = { y: e.clientY, height };
},
[height]
);
React.useEffect(() => {
if (!isResizing) return;
const handleMouseMove = (e: MouseEvent) => {
const delta = resizeStartRef.current.y - e.clientY;
const newHeight = Math.max(200, Math.min(600, resizeStartRef.current.height + delta));
onResizeHeight(newHeight);
};
const handleMouseUp = () => {
setIsResizing(false);
};
window.addEventListener('mousemove', handleMouseMove);
window.addEventListener('mouseup', handleMouseUp);
return () => {
window.removeEventListener('mousemove', handleMouseMove);
window.removeEventListener('mouseup', handleMouseUp);
};
}, [isResizing, onResizeHeight]);
if (!visible) {
// Collapsed state - just show header bar
return (
<div className="h-8 bg-card border-t border-border flex items-center px-3 gap-2 flex-shrink-0">
<button
onClick={onToggleVisible}
className="flex items-center gap-2 flex-1 hover:text-primary transition-colors text-sm font-medium"
>
<ChevronUp className="h-4 w-4" />
<span>Device View</span>
{track && (
<span className="text-muted-foreground">- {track.name}</span>
)}
</button>
{track && (
<div className="flex items-center gap-1">
<span className="text-xs text-muted-foreground">
{track.effectChain.effects.length} device{track.effectChain.effects.length !== 1 ? 's' : ''}
</span>
</div>
)}
</div>
);
}
return (
<div
className="bg-card border-t border-border flex flex-col flex-shrink-0 transition-all duration-300 ease-in-out"
style={{ height }}
>
{/* Resize handle */}
<div
className={cn(
'h-1 cursor-ns-resize hover:bg-primary/50 transition-colors group flex items-center justify-center',
isResizing && 'bg-primary/50'
)}
onMouseDown={handleResizeStart}
title="Drag to resize panel"
>
<div className="h-px w-16 bg-border group-hover:bg-primary transition-colors" />
</div>
{/* Header */}
<div className="h-10 flex-shrink-0 border-b border-border flex items-center px-3 gap-2 bg-muted/30">
<button
onClick={onToggleVisible}
className="flex items-center gap-2 flex-1 hover:text-primary transition-colors"
>
<ChevronDown className="h-4 w-4" />
<span className="text-sm font-medium">Device View</span>
{track && (
<>
<span className="text-sm text-muted-foreground">-</span>
<div
className="w-0.5 h-4 rounded-full"
style={{ backgroundColor: track.color }}
/>
<span className="text-sm font-semibold text-foreground">{track.name}</span>
</>
)}
</button>
{track && (
<>
<span className="text-xs text-muted-foreground">
{track.effectChain.effects.length} device{track.effectChain.effects.length !== 1 ? 's' : ''}
</span>
<Button
variant="ghost"
size="icon-sm"
onClick={() => setEffectBrowserOpen(true)}
title="Add effect"
className="h-7 w-7"
>
<Plus className="h-4 w-4" />
</Button>
</>
)}
</div>
{/* Device Rack */}
<div className="flex-1 overflow-x-auto overflow-y-hidden custom-scrollbar bg-background/50 p-3">
{!track ? (
<div className="h-full flex items-center justify-center text-sm text-muted-foreground">
Select a track to view its devices
</div>
) : track.effectChain.effects.length === 0 ? (
<div className="h-full flex flex-col items-center justify-center text-sm text-muted-foreground gap-2">
<p>No devices on this track</p>
<Button
variant="outline"
size="sm"
onClick={() => setEffectBrowserOpen(true)}
>
<Plus className="h-4 w-4 mr-1" />
Add Device
</Button>
</div>
) : (
<div className="flex h-full gap-3">
{track.effectChain.effects.map((effect) => (
<EffectDevice
key={effect.id}
effect={effect}
onToggleEnabled={() => onToggleEffect?.(effect.id)}
onRemove={() => onRemoveEffect?.(effect.id)}
onUpdateParameters={(params) => onUpdateEffect?.(effect.id, params)}
onToggleExpanded={() => onToggleEffectExpanded?.(effect.id)}
/>
))}
</div>
)}
</div>
{/* Effect Browser Dialog */}
{track && (
<EffectBrowser
open={effectBrowserOpen}
onClose={() => setEffectBrowserOpen(false)}
onSelectEffect={(effectType) => {
if (onAddEffect) {
onAddEffect(effectType);
}
setEffectBrowserOpen(false);
}}
/>
)}
</div>
);
}
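The panel's resize handler records the pointer position and height at drag start, then clamps each new height to the 200-600px range. The same arithmetic, extracted on its own (names are illustrative):

```typescript
// Illustrative: compute the next panel height from a vertical drag.
// Dragging up (clientY decreases) grows a bottom-docked panel,
// hence delta = startY - currentY.
function nextPanelHeight(
  startY: number,
  startHeight: number,
  currentY: number,
  min = 200,
  max = 600
): number {
  const delta = startY - currentY;
  return Math.max(min, Math.min(max, startHeight + delta));
}

nextPanelHeight(500, 300, 450); // drag up 50px → 350
nextPanelHeight(500, 300, 900); // drag far down → clamped to 200
```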

View File

@@ -0,0 +1,250 @@
'use client';
import * as React from 'react';
import { Save, FolderOpen, Trash2, Download, Upload } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { Modal } from '@/components/ui/Modal';
import type { EffectChain, EffectPreset } from '@/lib/audio/effects/chain';
import { createPreset, loadPreset } from '@/lib/audio/effects/chain';
export interface PresetManagerProps {
open: boolean;
onClose: () => void;
currentChain: EffectChain;
presets: EffectPreset[];
onSavePreset: (preset: EffectPreset) => void;
onLoadPreset: (preset: EffectPreset) => void;
onDeletePreset: (presetId: string) => void;
onExportPreset?: (preset: EffectPreset) => void;
onImportPreset?: (preset: EffectPreset) => void;
}
export function PresetManager({
open,
onClose,
currentChain,
presets,
onSavePreset,
onLoadPreset,
onDeletePreset,
onExportPreset,
onImportPreset,
}: PresetManagerProps) {
const [presetName, setPresetName] = React.useState('');
const [presetDescription, setPresetDescription] = React.useState('');
const [mode, setMode] = React.useState<'list' | 'create'>('list');
const fileInputRef = React.useRef<HTMLInputElement>(null);
const handleSave = () => {
if (!presetName.trim()) return;
const preset = createPreset(currentChain, presetName.trim(), presetDescription.trim());
onSavePreset(preset);
setPresetName('');
setPresetDescription('');
setMode('list');
};
const handleLoad = (preset: EffectPreset) => {
onLoadPreset(preset);
onClose();
};
const handleExport = (preset: EffectPreset) => {
if (!onExportPreset) return;
const data = JSON.stringify(preset, null, 2);
const blob = new Blob([data], { type: 'application/json' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = `${preset.name.replace(/[^a-z0-9]/gi, '_').toLowerCase()}.json`;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
onExportPreset(preset);
};
const handleImportClick = () => {
fileInputRef.current?.click();
};
const handleFileChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (!file || !onImportPreset) return;
try {
const text = await file.text();
const preset: EffectPreset = JSON.parse(text);
// Validate preset structure
if (!preset.id || !preset.name || !preset.chain) {
throw new Error('Invalid preset format');
}
onImportPreset(preset);
} catch (error) {
console.error('Failed to import preset:', error);
alert('Failed to import preset. Please check the file format.');
}
// Reset input
e.target.value = '';
};
return (
<Modal
open={open}
onClose={onClose}
title="Effect Presets"
className="max-w-2xl"
>
<div className="space-y-4">
{/* Mode Toggle */}
<div className="flex items-center gap-2 border-b border-border pb-3">
<Button
variant={mode === 'list' ? 'secondary' : 'ghost'}
size="sm"
onClick={() => setMode('list')}
>
<FolderOpen className="h-4 w-4 mr-2" />
Load Preset
</Button>
<Button
variant={mode === 'create' ? 'secondary' : 'ghost'}
size="sm"
onClick={() => setMode('create')}
>
<Save className="h-4 w-4 mr-2" />
Save Current
</Button>
{onImportPreset && (
<>
<Button variant="ghost" size="sm" onClick={handleImportClick}>
<Upload className="h-4 w-4 mr-2" />
Import
</Button>
<input
ref={fileInputRef}
type="file"
accept=".json"
onChange={handleFileChange}
className="hidden"
/>
</>
)}
</div>
{/* Create Mode */}
{mode === 'create' && (
<div className="space-y-3">
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Preset Name
</label>
<input
type="text"
value={presetName}
onChange={(e) => setPresetName(e.target.value)}
placeholder="My Awesome Preset"
className="w-full px-3 py-2 bg-background border border-border rounded-md text-sm"
/>
</div>
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Description (optional)
</label>
<textarea
value={presetDescription}
onChange={(e) => setPresetDescription(e.target.value)}
placeholder="What does this preset do?"
rows={3}
className="w-full px-3 py-2 bg-background border border-border rounded-md text-sm resize-none"
/>
</div>
<div className="text-xs text-muted-foreground">
Current chain has {currentChain.effects.length} effect
{currentChain.effects.length !== 1 ? 's' : ''}
</div>
<div className="flex justify-end gap-2">
<Button variant="outline" onClick={() => setMode('list')}>
Cancel
</Button>
<Button
onClick={handleSave}
disabled={!presetName.trim()}
>
<Save className="h-4 w-4 mr-2" />
Save Preset
</Button>
</div>
</div>
)}
{/* List Mode */}
{mode === 'list' && (
<div className="space-y-2 max-h-96 overflow-y-auto custom-scrollbar">
{presets.length === 0 ? (
<div className="text-center py-8 text-sm text-muted-foreground">
No presets saved yet. Create one to get started!
</div>
) : (
presets.map((preset) => (
<div
key={preset.id}
className="flex items-start gap-3 p-3 rounded-lg border border-border bg-card hover:bg-accent/50 transition-colors"
>
<div className="flex-1 min-w-0">
<div className="text-sm font-medium text-foreground">
{preset.name}
</div>
{preset.description && (
<div className="text-xs text-muted-foreground mt-1">
{preset.description}
</div>
)}
<div className="text-xs text-muted-foreground mt-1">
{preset.chain.effects.length} effect
{preset.chain.effects.length !== 1 ? 's' : ''} ·{' '}
{new Date(preset.createdAt).toLocaleDateString()}
</div>
</div>
<div className="flex items-center gap-1">
<Button
variant="ghost"
size="icon-sm"
onClick={() => handleLoad(preset)}
title="Load preset"
>
<FolderOpen className="h-4 w-4" />
</Button>
{onExportPreset && (
<Button
variant="ghost"
size="icon-sm"
onClick={() => handleExport(preset)}
title="Export preset"
>
<Download className="h-4 w-4" />
</Button>
)}
<Button
variant="ghost"
size="icon-sm"
onClick={() => onDeletePreset(preset.id)}
title="Delete preset"
>
<Trash2 className="h-4 w-4 text-destructive" />
</Button>
</div>
</div>
))
)}
</div>
)}
</div>
</Modal>
);
}
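`handleFileChange` accepts an imported file once `id`, `name`, and `chain` are present. That check can be factored into a reusable type guard that also verifies the types of each field (a sketch; the field shapes assume the `EffectPreset` structure used in this dialog):

```typescript
// Hypothetical type guard for imported preset JSON.
interface ImportedPreset {
  id: string;
  name: string;
  chain: { effects: unknown[] };
}

function isValidPreset(value: unknown): value is ImportedPreset {
  if (typeof value !== 'object' || value === null) return false;
  const p = value as Record<string, unknown>;
  return (
    typeof p.id === 'string' &&
    typeof p.name === 'string' &&
    typeof p.chain === 'object' &&
    p.chain !== null &&
    Array.isArray((p.chain as Record<string, unknown>).effects)
  );
}

isValidPreset(JSON.parse('{"id":"a","name":"My Preset","chain":{"effects":[]}}')); // → true
isValidPreset({ name: 'missing fields' }); // → false
```

A guard like this keeps the `try/catch` in the import handler focused on parse errors while structural problems get a precise rejection.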

View File

@@ -0,0 +1,676 @@
'use client';
import * as React from 'react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import { Slider } from '@/components/ui/Slider';
import { cn } from '@/lib/utils/cn';
import type {
DelayParameters,
ReverbParameters,
ChorusParameters,
FlangerParameters,
PhaserParameters,
} from '@/lib/audio/effects/time-based';
export type TimeBasedType = 'delay' | 'reverb' | 'chorus' | 'flanger' | 'phaser';
export type TimeBasedParameters =
| (DelayParameters & { type: 'delay' })
| (ReverbParameters & { type: 'reverb' })
| (ChorusParameters & { type: 'chorus' })
| (FlangerParameters & { type: 'flanger' })
| (PhaserParameters & { type: 'phaser' });
export interface EffectPreset {
name: string;
parameters: Partial<DelayParameters | ReverbParameters | ChorusParameters | FlangerParameters | PhaserParameters>;
}
export interface TimeBasedParameterDialogProps {
open: boolean;
onClose: () => void;
effectType: TimeBasedType;
onApply: (params: TimeBasedParameters) => void;
}
const EFFECT_LABELS: Record<TimeBasedType, string> = {
delay: 'Delay/Echo',
reverb: 'Reverb',
chorus: 'Chorus',
flanger: 'Flanger',
phaser: 'Phaser',
};
const EFFECT_DESCRIPTIONS: Record<TimeBasedType, string> = {
delay: 'Creates echo effects by repeating the audio signal',
reverb: 'Simulates acoustic space and ambience',
chorus: 'Thickens sound by adding modulated copies',
flanger: 'Creates sweeping comb-filter effect',
phaser: 'Creates a phase-shifting swoosh effect',
};
const PRESETS: Record<TimeBasedType, EffectPreset[]> = {
delay: [
{ name: 'Short Slap', parameters: { time: 80, feedback: 0.2, mix: 0.3 } },
{ name: 'Medium Echo', parameters: { time: 250, feedback: 0.4, mix: 0.4 } },
{ name: 'Long Echo', parameters: { time: 500, feedback: 0.5, mix: 0.5 } },
{ name: 'Ping Pong', parameters: { time: 375, feedback: 0.6, mix: 0.4 } },
],
reverb: [
{ name: 'Small Room', parameters: { roomSize: 0.3, damping: 0.5, mix: 0.2 } },
{ name: 'Medium Hall', parameters: { roomSize: 0.6, damping: 0.3, mix: 0.3 } },
{ name: 'Large Hall', parameters: { roomSize: 0.8, damping: 0.2, mix: 0.4 } },
{ name: 'Cathedral', parameters: { roomSize: 1.0, damping: 0.1, mix: 0.5 } },
],
chorus: [
{ name: 'Subtle', parameters: { rate: 0.5, depth: 0.2, delay: 20, mix: 0.3 } },
{ name: 'Classic', parameters: { rate: 1.0, depth: 0.5, delay: 25, mix: 0.5 } },
{ name: 'Deep', parameters: { rate: 1.5, depth: 0.7, delay: 30, mix: 0.6 } },
{ name: 'Lush', parameters: { rate: 0.8, depth: 0.6, delay: 35, mix: 0.7 } },
],
flanger: [
{ name: 'Subtle', parameters: { rate: 0.3, depth: 0.3, feedback: 0.2, delay: 2, mix: 0.4 } },
{ name: 'Classic', parameters: { rate: 0.5, depth: 0.5, feedback: 0.4, delay: 3, mix: 0.5 } },
{ name: 'Jet', parameters: { rate: 0.2, depth: 0.7, feedback: 0.6, delay: 1.5, mix: 0.6 } },
{ name: 'Extreme', parameters: { rate: 1.0, depth: 0.8, feedback: 0.7, delay: 2.5, mix: 0.7 } },
],
phaser: [
{ name: 'Gentle', parameters: { rate: 0.4, depth: 0.3, feedback: 0.2, stages: 4, mix: 0.4 } },
{ name: 'Classic', parameters: { rate: 0.6, depth: 0.5, feedback: 0.4, stages: 6, mix: 0.5 } },
{ name: 'Deep', parameters: { rate: 0.3, depth: 0.7, feedback: 0.5, stages: 8, mix: 0.6 } },
{ name: 'Vintage', parameters: { rate: 0.5, depth: 0.6, feedback: 0.6, stages: 4, mix: 0.7 } },
],
};
export function TimeBasedParameterDialog({
open,
onClose,
effectType,
onApply,
}: TimeBasedParameterDialogProps) {
const [parameters, setParameters] = React.useState<TimeBasedParameters>(() => {
if (effectType === 'delay') {
return { type: 'delay', time: 250, feedback: 0.4, mix: 0.4 };
} else if (effectType === 'reverb') {
return { type: 'reverb', roomSize: 0.6, damping: 0.3, mix: 0.3 };
} else if (effectType === 'chorus') {
return { type: 'chorus', rate: 1.0, depth: 0.5, delay: 25, mix: 0.5 };
} else if (effectType === 'flanger') {
return { type: 'flanger', rate: 0.5, depth: 0.5, feedback: 0.4, delay: 3, mix: 0.5 };
} else {
return { type: 'phaser', rate: 0.6, depth: 0.5, feedback: 0.4, stages: 6, mix: 0.5 };
}
});
const canvasRef = React.useRef<HTMLCanvasElement>(null);
// Get appropriate presets for this effect type
const presets = PRESETS[effectType] || [];
// Update parameters when effect type changes
React.useEffect(() => {
if (effectType === 'delay') {
setParameters({ type: 'delay', time: 250, feedback: 0.4, mix: 0.4 });
} else if (effectType === 'reverb') {
setParameters({ type: 'reverb', roomSize: 0.6, damping: 0.3, mix: 0.3 });
} else if (effectType === 'chorus') {
setParameters({ type: 'chorus', rate: 1.0, depth: 0.5, delay: 25, mix: 0.5 });
} else if (effectType === 'flanger') {
setParameters({ type: 'flanger', rate: 0.5, depth: 0.5, feedback: 0.4, delay: 3, mix: 0.5 });
} else {
setParameters({ type: 'phaser', rate: 0.6, depth: 0.5, feedback: 0.4, stages: 6, mix: 0.5 });
}
}, [effectType]);
// Draw visualization
React.useEffect(() => {
if (!open || !canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Get actual dimensions
const rect = canvas.getBoundingClientRect();
const dpr = window.devicePixelRatio || 1;
// Ensure canvas has dimensions before drawing
if (rect.width === 0 || rect.height === 0) return;
// Set actual size in memory
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
// Normalize coordinate system
ctx.scale(dpr, dpr);
// Clear canvas (coordinates are CSS pixels after ctx.scale)
ctx.clearRect(0, 0, rect.width, rect.height);
const width = rect.width;
const height = rect.height;
// Clear with background
ctx.fillStyle = getComputedStyle(canvas).getPropertyValue('background-color') || '#1a1a1a';
ctx.fillRect(0, 0, width, height);
if (effectType === 'delay') {
// Draw delay echoes
const delayParams = parameters as DelayParameters & { type: 'delay' };
const maxTime = 2000; // ms
const echoCount = 5;
ctx.strokeStyle = 'rgba(128, 128, 128, 0.3)';
ctx.lineWidth = 1;
ctx.setLineDash([2, 2]);
for (let i = 0; i <= 4; i++) {
const x = (i / 4) * width;
ctx.beginPath();
ctx.moveTo(x, 0);
ctx.lineTo(x, height);
ctx.stroke();
}
ctx.setLineDash([]);
let gain = 1.0;
for (let i = 0; i < echoCount; i++) {
const x = (i * delayParams.time / maxTime) * width;
const barHeight = height * gain * 0.8;
const y = (height - barHeight) / 2;
ctx.fillStyle = `rgba(59, 130, 246, ${gain})`;
ctx.fillRect(x - 3, y, 6, barHeight);
gain *= delayParams.feedback;
if (gain < 0.01) break;
}
} else if (effectType === 'reverb') {
// Draw reverb decay
const reverbParams = parameters as ReverbParameters & { type: 'reverb' };
const decayTime = reverbParams.roomSize * 3000; // ms
ctx.strokeStyle = '#3b82f6';
ctx.lineWidth = 2;
ctx.beginPath();
for (let x = 0; x < width; x++) {
const time = (x / width) * 3000;
const decay = Math.exp(-time / (decayTime * (1 - reverbParams.damping * 0.5)));
const y = height / 2 + (height / 2 - 20) * (1 - decay);
if (x === 0) ctx.moveTo(x, y);
else ctx.lineTo(x, y);
}
ctx.stroke();
// Draw reference line
ctx.strokeStyle = 'rgba(128, 128, 128, 0.3)';
ctx.lineWidth = 1;
ctx.setLineDash([5, 5]);
ctx.beginPath();
ctx.moveTo(0, height / 2);
ctx.lineTo(width, height / 2);
ctx.stroke();
ctx.setLineDash([]);
} else {
// Draw LFO waveform for chorus, flanger, phaser
let rate = 1.0;
let depth = 0.5;
if (effectType === 'chorus') {
const chorusParams = parameters as ChorusParameters & { type: 'chorus' };
rate = chorusParams.rate;
depth = chorusParams.depth;
} else if (effectType === 'flanger') {
const flangerParams = parameters as FlangerParameters & { type: 'flanger' };
rate = flangerParams.rate;
depth = flangerParams.depth;
} else if (effectType === 'phaser') {
const phaserParams = parameters as PhaserParameters & { type: 'phaser' };
rate = phaserParams.rate;
depth = phaserParams.depth;
}
ctx.strokeStyle = '#3b82f6';
ctx.lineWidth = 2;
ctx.beginPath();
const cycles = rate * 2; // Show 2 seconds worth
for (let x = 0; x < width; x++) {
const phase = (x / width) * cycles * 2 * Math.PI;
const lfo = Math.sin(phase);
const y = height / 2 - (lfo * depth * height * 0.4);
if (x === 0) ctx.moveTo(x, y);
else ctx.lineTo(x, y);
}
ctx.stroke();
// Draw center line
ctx.strokeStyle = 'rgba(128, 128, 128, 0.3)';
ctx.lineWidth = 1;
ctx.setLineDash([5, 5]);
ctx.beginPath();
ctx.moveTo(0, height / 2);
ctx.lineTo(width, height / 2);
ctx.stroke();
ctx.setLineDash([]);
}
}, [parameters, effectType, open]);
const handleApply = () => {
onApply(parameters);
onClose();
};
const handlePresetClick = (preset: EffectPreset) => {
setParameters((prev) => ({
...prev,
...preset.parameters,
}));
};
return (
<Modal
open={open}
onClose={onClose}
title={EFFECT_LABELS[effectType] || 'Time-Based Effect'}
description={EFFECT_DESCRIPTIONS[effectType]}
size="lg"
footer={
<>
<Button variant="outline" onClick={onClose}>
Cancel
</Button>
<Button onClick={handleApply}>
Apply Effect
</Button>
</>
}
>
<div className="space-y-4">
{/* Visualization */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
{effectType === 'delay' && 'Echo Pattern'}
{effectType === 'reverb' && 'Reverb Decay'}
{(effectType === 'chorus' || effectType === 'flanger' || effectType === 'phaser') && 'Modulation (LFO)'}
</label>
<canvas
ref={canvasRef}
className="w-full h-32 border border-border rounded bg-background"
/>
</div>
{/* Presets */}
{presets.length > 0 && (
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Presets
</label>
<div className="grid grid-cols-2 gap-2">
{presets.map((preset) => (
<Button
key={preset.name}
variant="outline"
size="sm"
onClick={() => handlePresetClick(preset)}
className="justify-start"
>
{preset.name}
</Button>
))}
</div>
</div>
)}
{/* Effect-specific parameters */}
{effectType === 'delay' && (
<>
{/* Delay Time */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Delay Time</span>
<span className="text-muted-foreground font-mono">
{((parameters as DelayParameters).time ?? 250).toFixed(0)} ms
</span>
</label>
<Slider
value={[((parameters as DelayParameters).time ?? 250)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, time: value }))
}
min={10}
max={2000}
step={10}
className="w-full"
/>
</div>
{/* Feedback */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Feedback</span>
<span className="text-muted-foreground font-mono">
{(((parameters as DelayParameters).feedback ?? 0.4) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as DelayParameters).feedback ?? 0.4)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, feedback: value }))
}
min={0}
max={0.95}
step={0.01}
className="w-full"
/>
</div>
</>
)}
{effectType === 'reverb' && (
<>
{/* Room Size */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Room Size</span>
<span className="text-muted-foreground font-mono">
{(((parameters as ReverbParameters).roomSize ?? 0.6) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as ReverbParameters).roomSize ?? 0.6)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, roomSize: value }))
}
min={0.1}
max={1}
step={0.01}
className="w-full"
/>
</div>
{/* Damping */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Damping</span>
<span className="text-muted-foreground font-mono">
{(((parameters as ReverbParameters).damping ?? 0.3) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as ReverbParameters).damping ?? 0.3)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, damping: value }))
}
min={0}
max={1}
step={0.01}
className="w-full"
/>
</div>
</>
)}
{effectType === 'chorus' && (
<>
{/* Rate */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Rate</span>
<span className="text-muted-foreground font-mono">
{((parameters as ChorusParameters).rate ?? 1.0).toFixed(2)} Hz
</span>
</label>
<Slider
value={[((parameters as ChorusParameters).rate ?? 1.0)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, rate: value }))
}
min={0.1}
max={5}
step={0.1}
className="w-full"
/>
</div>
{/* Depth */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Depth</span>
<span className="text-muted-foreground font-mono">
{(((parameters as ChorusParameters).depth ?? 0.5) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as ChorusParameters).depth ?? 0.5)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, depth: value }))
}
min={0}
max={1}
step={0.01}
className="w-full"
/>
</div>
{/* Base Delay */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Base Delay</span>
<span className="text-muted-foreground font-mono">
{((parameters as ChorusParameters).delay ?? 25).toFixed(1)} ms
</span>
</label>
<Slider
value={[((parameters as ChorusParameters).delay ?? 25)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, delay: value }))
}
min={5}
max={50}
step={0.5}
className="w-full"
/>
</div>
</>
)}
{effectType === 'flanger' && (
<>
{/* Rate */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Rate</span>
<span className="text-muted-foreground font-mono">
{((parameters as FlangerParameters).rate ?? 0.5).toFixed(2)} Hz
</span>
</label>
<Slider
value={[((parameters as FlangerParameters).rate ?? 0.5)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, rate: value }))
}
min={0.1}
max={5}
step={0.1}
className="w-full"
/>
</div>
{/* Depth */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Depth</span>
<span className="text-muted-foreground font-mono">
{(((parameters as FlangerParameters).depth ?? 0.5) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as FlangerParameters).depth ?? 0.5)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, depth: value }))
}
min={0}
max={1}
step={0.01}
className="w-full"
/>
</div>
{/* Feedback */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Feedback</span>
<span className="text-muted-foreground font-mono">
{(((parameters as FlangerParameters).feedback ?? 0.4) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as FlangerParameters).feedback ?? 0.4)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, feedback: value }))
}
min={0}
max={0.95}
step={0.01}
className="w-full"
/>
</div>
{/* Base Delay */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Base Delay</span>
<span className="text-muted-foreground font-mono">
{((parameters as FlangerParameters).delay ?? 3).toFixed(1)} ms
</span>
</label>
<Slider
value={[((parameters as FlangerParameters).delay ?? 3)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, delay: value }))
}
min={0.5}
max={10}
step={0.1}
className="w-full"
/>
</div>
</>
)}
{effectType === 'phaser' && (
<>
{/* Rate */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Rate</span>
<span className="text-muted-foreground font-mono">
{((parameters as PhaserParameters).rate ?? 0.6).toFixed(2)} Hz
</span>
</label>
<Slider
value={[((parameters as PhaserParameters).rate ?? 0.6)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, rate: value }))
}
min={0.1}
max={5}
step={0.1}
className="w-full"
/>
</div>
{/* Depth */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Depth</span>
<span className="text-muted-foreground font-mono">
{(((parameters as PhaserParameters).depth ?? 0.5) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as PhaserParameters).depth ?? 0.5)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, depth: value }))
}
min={0}
max={1}
step={0.01}
className="w-full"
/>
</div>
{/* Feedback */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Feedback</span>
<span className="text-muted-foreground font-mono">
{(((parameters as PhaserParameters).feedback ?? 0.4) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[((parameters as PhaserParameters).feedback ?? 0.4)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, feedback: value }))
}
min={0}
max={0.95}
step={0.01}
className="w-full"
/>
</div>
{/* Stages */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Stages</span>
<span className="text-muted-foreground font-mono">
{((parameters as PhaserParameters).stages ?? 6).toFixed(0)}
</span>
</label>
<Slider
value={[((parameters as PhaserParameters).stages ?? 6)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, stages: Math.floor(value) }))
}
min={2}
max={12}
step={1}
className="w-full"
/>
</div>
</>
)}
{/* Mix (common to all) */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground flex justify-between">
<span>Mix (Dry/Wet)</span>
<span className="text-muted-foreground font-mono">
{((parameters.mix ?? 0.5) * 100).toFixed(0)}%
</span>
</label>
<Slider
value={[(parameters.mix ?? 0.5)]}
onValueChange={([value]) =>
setParameters((prev) => ({ ...prev, mix: value }))
}
min={0}
max={1}
step={0.01}
className="w-full"
/>
<div className="flex justify-between text-xs text-muted-foreground">
<span>0% (Dry)</span>
<span>100% (Wet)</span>
</div>
</div>
</div>
</Modal>
);
}


@@ -0,0 +1,303 @@
'use client';
import * as React from 'react';
import {
ChevronLeft,
ChevronRight,
Upload,
Plus,
Trash2,
Link2,
FolderOpen,
Music2,
} from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import type { Track } from '@/types/track';
import type { EffectChain, EffectPreset } from '@/lib/audio/effects/chain';
import { EffectRack } from '@/components/effects/EffectRack';
import { PresetManager } from '@/components/effects/PresetManager';
export interface SidePanelProps {
tracks: Track[];
selectedTrackId: string | null;
onSelectTrack: (trackId: string | null) => void;
onAddTrack: () => void;
onImportTracks: () => void;
onUpdateTrack: (trackId: string, updates: Partial<Track>) => void;
onRemoveTrack: (trackId: string) => void;
onClearTracks: () => void;
// Track effect chain (for selected track)
trackEffectChain: EffectChain | null;
onToggleTrackEffect: (effectId: string) => void;
onRemoveTrackEffect: (effectId: string) => void;
onReorderTrackEffects: (fromIndex: number, toIndex: number) => void;
onClearTrackChain: () => void;
// Master effect chain
masterEffectChain: EffectChain;
masterEffectPresets: EffectPreset[];
onToggleMasterEffect: (effectId: string) => void;
onRemoveMasterEffect: (effectId: string) => void;
onReorderMasterEffects: (fromIndex: number, toIndex: number) => void;
onSaveMasterPreset: (preset: EffectPreset) => void;
onLoadMasterPreset: (preset: EffectPreset) => void;
onDeleteMasterPreset: (presetId: string) => void;
onClearMasterChain: () => void;
className?: string;
}
export function SidePanel({
tracks,
selectedTrackId,
onSelectTrack,
onAddTrack,
onImportTracks,
onUpdateTrack,
onRemoveTrack,
onClearTracks,
trackEffectChain,
onToggleTrackEffect,
onRemoveTrackEffect,
onReorderTrackEffects,
onClearTrackChain,
masterEffectChain,
masterEffectPresets,
onToggleMasterEffect,
onRemoveMasterEffect,
onReorderMasterEffects,
onSaveMasterPreset,
onLoadMasterPreset,
onDeleteMasterPreset,
onClearMasterChain,
className,
}: SidePanelProps) {
const [isCollapsed, setIsCollapsed] = React.useState(false);
const [activeTab, setActiveTab] = React.useState<'tracks' | 'master'>('tracks');
const [presetDialogOpen, setPresetDialogOpen] = React.useState(false);
const selectedTrack = tracks.find((t) => t.id === selectedTrackId);
if (isCollapsed) {
return (
<div
className={cn(
'w-12 bg-card border-r border-border flex flex-col items-center py-2',
className
)}
>
<Button
variant="ghost"
size="icon-sm"
onClick={() => setIsCollapsed(false)}
title="Expand Side Panel"
>
<ChevronRight className="h-4 w-4" />
</Button>
</div>
);
}
return (
<div className={cn('w-80 bg-card border-r border-border flex flex-col', className)}>
{/* Header */}
<div className="flex items-center justify-between p-2 border-b border-border">
<div className="flex items-center gap-1">
<Button
variant={activeTab === 'tracks' ? 'secondary' : 'ghost'}
size="sm"
onClick={() => setActiveTab('tracks')}
title="Tracks"
>
<Music2 className="h-4 w-4 mr-1.5" />
Tracks
</Button>
<Button
variant={activeTab === 'master' ? 'secondary' : 'ghost'}
size="sm"
onClick={() => setActiveTab('master')}
title="Master"
>
<Link2 className="h-4 w-4 mr-1.5 text-primary" />
Master
</Button>
</div>
<Button
variant="ghost"
size="icon-sm"
onClick={() => setIsCollapsed(true)}
title="Collapse Side Panel"
>
<ChevronLeft className="h-4 w-4" />
</Button>
</div>
{/* Content */}
<div className="flex-1 overflow-y-auto p-3 space-y-3 custom-scrollbar">
{activeTab === 'tracks' && (
<>
{/* Track Actions */}
<div className="space-y-2">
<h3 className="text-xs font-semibold text-muted-foreground uppercase">
Track Management
</h3>
<div className="flex gap-2">
<Button
variant="outline"
size="sm"
onClick={onAddTrack}
className="flex-1"
>
<Plus className="h-3.5 w-3.5 mr-1.5" />
Add Track
</Button>
<Button
variant="outline"
size="sm"
onClick={onImportTracks}
className="flex-1"
>
<Upload className="h-3.5 w-3.5 mr-1.5" />
Import
</Button>
</div>
{tracks.length > 0 && (
<Button
variant="outline"
size="sm"
onClick={onClearTracks}
className="w-full"
>
<Trash2 className="h-3.5 w-3.5 mr-1.5 text-destructive" />
Clear All Tracks
</Button>
)}
</div>
{/* Track List Summary */}
{tracks.length > 0 ? (
<div className="space-y-2">
<div className="flex items-center justify-between">
<h3 className="text-xs font-semibold text-muted-foreground uppercase">
Tracks ({tracks.length})
</h3>
{selectedTrack && (
<span className="text-xs text-primary">
{String(selectedTrack.name || 'Untitled Track')} selected
</span>
)}
</div>
<div className="text-xs text-muted-foreground">
<p>
{selectedTrack
? 'Track controls are on the left of each track. Effects for the selected track are shown below.'
: 'Click a track\'s waveform to select it and edit its effects below.'}
</p>
</div>
</div>
) : (
<div className="text-center py-8">
<Music2 className="h-12 w-12 mx-auto text-muted-foreground/50 mb-2" />
<p className="text-sm text-muted-foreground mb-4">
No tracks yet. Add or import audio files to get started.
</p>
</div>
)}
{/* Selected Track Effects */}
{selectedTrack && (
<div className="space-y-2 pt-3 border-t border-border">
<div className="flex items-center justify-between">
<h3 className="text-xs font-semibold text-muted-foreground uppercase">
Track Effects
</h3>
{trackEffectChain && trackEffectChain.effects.length > 0 && (
<Button
variant="ghost"
size="icon-sm"
onClick={onClearTrackChain}
title="Clear all effects"
>
<Trash2 className="h-4 w-4 text-destructive" />
</Button>
)}
</div>
{trackEffectChain && (
<EffectRack
chain={trackEffectChain}
onToggleEffect={onToggleTrackEffect}
onRemoveEffect={onRemoveTrackEffect}
onReorderEffects={onReorderTrackEffects}
/>
)}
</div>
)}
</>
)}
{activeTab === 'master' && (
<>
{/* Master Channel Info */}
<div className="space-y-2">
<h3 className="text-xs font-semibold text-muted-foreground uppercase">
Master Channel
</h3>
<div className="text-xs text-muted-foreground">
<p>
Master effects are applied to the final mix of all tracks.
</p>
</div>
</div>
{/* Master Effects */}
<div className="space-y-2 pt-3 border-t border-border">
<div className="flex items-center justify-between">
<h3 className="text-xs font-semibold text-muted-foreground uppercase">
Master Effects
</h3>
<div className="flex gap-1">
<Button
variant="ghost"
size="icon-sm"
onClick={() => setPresetDialogOpen(true)}
title="Manage presets"
>
<FolderOpen className="h-4 w-4" />
</Button>
{masterEffectChain.effects.length > 0 && (
<Button
variant="ghost"
size="icon-sm"
onClick={onClearMasterChain}
title="Clear all effects"
>
<Trash2 className="h-4 w-4 text-destructive" />
</Button>
)}
</div>
</div>
<EffectRack
chain={masterEffectChain}
onToggleEffect={onToggleMasterEffect}
onRemoveEffect={onRemoveMasterEffect}
onReorderEffects={onReorderMasterEffects}
/>
<PresetManager
open={presetDialogOpen}
onClose={() => setPresetDialogOpen(false)}
currentChain={masterEffectChain}
presets={masterEffectPresets}
onSavePreset={onSaveMasterPreset}
onLoadPreset={onLoadMasterPreset}
onDeletePreset={onDeleteMasterPreset}
onExportPreset={() => {}}
onImportPreset={(preset) => onSaveMasterPreset(preset)}
/>
</div>
</>
)}
</div>
</div>
);
}


@@ -0,0 +1,237 @@
'use client';
import * as React from 'react';
import {
Play,
Pause,
Square,
SkipBack,
Scissors,
Copy,
Clipboard,
Trash2,
CropIcon,
Undo2,
Redo2,
ZoomIn,
ZoomOut,
Maximize2,
} from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
export interface ToolbarProps {
// Playback
isPlaying: boolean;
isPaused: boolean;
onPlay: () => void;
onPause: () => void;
onStop: () => void;
// Edit
hasSelection: boolean;
hasClipboard: boolean;
onCut: () => void;
onCopy: () => void;
onPaste: () => void;
onDelete: () => void;
onTrim: () => void;
// History
canUndo: boolean;
canRedo: boolean;
onUndo: () => void;
onRedo: () => void;
// Zoom
onZoomIn: () => void;
onZoomOut: () => void;
onFitToView: () => void;
disabled?: boolean;
className?: string;
}
export function Toolbar({
isPlaying,
isPaused,
onPlay,
onPause,
onStop,
hasSelection,
hasClipboard,
onCut,
onCopy,
onPaste,
onDelete,
onTrim,
canUndo,
canRedo,
onUndo,
onRedo,
onZoomIn,
onZoomOut,
onFitToView,
disabled = false,
className,
}: ToolbarProps) {
const handlePlayPause = () => {
if (isPlaying) {
onPause();
} else {
onPlay();
}
};
return (
<div
className={cn(
'flex items-center gap-1 px-2 py-1.5 bg-card border-b border-border',
className
)}
>
{/* Transport Controls */}
<div className="flex items-center gap-0.5 pr-2 border-r border-border">
<Button
variant="ghost"
size="icon-sm"
onClick={onStop}
disabled={disabled || (!isPlaying && !isPaused)}
title="Stop"
>
<SkipBack className="h-4 w-4" />
</Button>
<Button
variant={isPlaying ? 'default' : 'ghost'}
size="icon-sm"
onClick={handlePlayPause}
disabled={disabled}
title={isPlaying ? 'Pause (Space)' : 'Play (Space)'}
>
{isPlaying ? (
<Pause className="h-4 w-4" />
) : (
<Play className="h-4 w-4 ml-0.5" />
)}
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onStop}
disabled={disabled || (!isPlaying && !isPaused)}
title="Stop"
>
<Square className="h-4 w-4" />
</Button>
</div>
{/* Edit Tools */}
<div className="flex items-center gap-0.5 px-2 border-r border-border">
<Button
variant="ghost"
size="icon-sm"
onClick={onCut}
disabled={!hasSelection}
title="Cut (Ctrl+X)"
>
<Scissors className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onCopy}
disabled={!hasSelection}
title="Copy (Ctrl+C)"
>
<Copy className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onPaste}
disabled={!hasClipboard}
title="Paste (Ctrl+V)"
>
<Clipboard className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onDelete}
disabled={!hasSelection}
title="Delete (Del)"
>
<Trash2 className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onTrim}
disabled={!hasSelection}
title="Trim to Selection"
>
<CropIcon className="h-4 w-4" />
</Button>
</div>
{/* History */}
<div className="flex items-center gap-0.5 px-2 border-r border-border">
<Button
variant="ghost"
size="icon-sm"
onClick={onUndo}
disabled={!canUndo}
title="Undo (Ctrl+Z)"
>
<Undo2 className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onRedo}
disabled={!canRedo}
title="Redo (Ctrl+Y)"
>
<Redo2 className="h-4 w-4" />
</Button>
</div>
{/* Zoom Controls */}
<div className="flex items-center gap-0.5 px-2">
<Button
variant="ghost"
size="icon-sm"
onClick={onZoomOut}
title="Zoom Out"
>
<ZoomOut className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onZoomIn}
title="Zoom In"
>
<ZoomIn className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={onFitToView}
title="Fit to View"
>
<Maximize2 className="h-4 w-4" />
</Button>
</div>
</div>
);
}


@@ -0,0 +1,188 @@
'use client';
import * as React from 'react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import type { Marker, MarkerType } from '@/types/marker';
export interface MarkerDialogProps {
open: boolean;
onClose: () => void;
onSave: (marker: Partial<Marker>) => void;
marker?: Marker; // If editing existing marker
defaultTime?: number; // Default time for new markers
defaultType?: MarkerType;
}
const MARKER_COLORS = [
'#ef4444', // red
'#f97316', // orange
'#eab308', // yellow
'#22c55e', // green
'#3b82f6', // blue
'#a855f7', // purple
'#ec4899', // pink
];
export function MarkerDialog({
open,
onClose,
onSave,
marker,
defaultTime = 0,
defaultType = 'point',
}: MarkerDialogProps) {
const [name, setName] = React.useState(marker?.name || '');
const [type, setType] = React.useState<MarkerType>(marker?.type || defaultType);
const [time, setTime] = React.useState(marker?.time ?? defaultTime);
const [endTime, setEndTime] = React.useState(marker?.endTime ?? defaultTime + 1);
const [color, setColor] = React.useState(marker?.color || MARKER_COLORS[0]);
const [description, setDescription] = React.useState(marker?.description || '');
// Reset form when marker changes or dialog opens
React.useEffect(() => {
if (open) {
setName(marker?.name || '');
setType(marker?.type || defaultType);
setTime(marker?.time ?? defaultTime);
setEndTime(marker?.endTime ?? defaultTime + 1);
setColor(marker?.color || MARKER_COLORS[0]);
setDescription(marker?.description || '');
}
}, [open, marker, defaultTime, defaultType]);
const handleSave = () => {
const markerData: Partial<Marker> = {
...(marker?.id && { id: marker.id }),
name: name || 'Untitled Marker',
type,
time,
...(type === 'region' && { endTime }),
color,
description,
};
onSave(markerData);
onClose();
};
return (
<Modal
open={open}
onClose={onClose}
title={marker ? 'Edit Marker' : 'Add Marker'}
description={marker ? 'Edit marker properties' : 'Add a new marker or region to the timeline'}
size="md"
footer={
<>
<Button variant="outline" onClick={onClose}>
Cancel
</Button>
<Button onClick={handleSave}>{marker ? 'Save' : 'Add'}</Button>
</>
}
>
<div className="space-y-4">
{/* Name */}
<div className="space-y-2">
<label htmlFor="name" className="text-sm font-medium text-foreground">
Name
</label>
<input
id="name"
type="text"
value={name}
onChange={(e) => setName(e.target.value)}
placeholder="Marker name"
className="flex h-10 w-full rounded-md border border-border bg-background px-3 py-2 text-sm text-foreground placeholder:text-muted-foreground focus:outline-none focus:ring-2 focus:ring-primary focus:ring-offset-2 focus:ring-offset-background"
/>
</div>
{/* Type */}
<div className="space-y-2">
<label htmlFor="type" className="text-sm font-medium text-foreground">
Type
</label>
<select
id="type"
value={type}
onChange={(e) => setType(e.target.value as MarkerType)}
className="flex h-10 w-full rounded-md border border-border bg-background px-3 py-2 text-sm text-foreground focus:outline-none focus:ring-2 focus:ring-primary focus:ring-offset-2 focus:ring-offset-background"
>
<option value="point">Point Marker</option>
<option value="region">Region</option>
</select>
</div>
{/* Time */}
<div className="space-y-2">
<label htmlFor="time" className="text-sm font-medium text-foreground">
{type === 'region' ? 'Start Time' : 'Time'} (seconds)
</label>
<input
id="time"
type="number"
step="0.1"
min="0"
value={time}
onChange={(e) => setTime(Math.max(0, parseFloat(e.target.value) || 0))}
className="flex h-10 w-full rounded-md border border-border bg-background px-3 py-2 text-sm text-foreground focus:outline-none focus:ring-2 focus:ring-primary focus:ring-offset-2 focus:ring-offset-background"
/>
</div>
{/* End Time (for regions) */}
{type === 'region' && (
<div className="space-y-2">
<label htmlFor="endTime" className="text-sm font-medium text-foreground">
End Time (seconds)
</label>
<input
id="endTime"
type="number"
step="0.1"
min={time}
value={endTime}
onChange={(e) => setEndTime(parseFloat(e.target.value) || 0)}
className="flex h-10 w-full rounded-md border border-border bg-background px-3 py-2 text-sm text-foreground focus:outline-none focus:ring-2 focus:ring-primary focus:ring-offset-2 focus:ring-offset-background"
/>
</div>
)}
{/* Color */}
<div className="space-y-2">
<label className="text-sm font-medium text-foreground">
Color
</label>
<div className="flex gap-2">
{MARKER_COLORS.map((c) => (
<button
key={c}
type="button"
className="w-8 h-8 rounded border-2 transition-all hover:scale-110"
style={{
backgroundColor: c,
borderColor: color === c ? 'white' : 'transparent',
}}
onClick={() => setColor(c)}
/>
))}
</div>
</div>
{/* Description */}
<div className="space-y-2">
<label htmlFor="description" className="text-sm font-medium text-foreground">
Description (optional)
</label>
<input
id="description"
type="text"
value={description}
onChange={(e) => setDescription(e.target.value)}
placeholder="Optional description"
className="flex h-10 w-full rounded-md border border-border bg-background px-3 py-2 text-sm text-foreground placeholder:text-muted-foreground focus:outline-none focus:ring-2 focus:ring-primary focus:ring-offset-2 focus:ring-offset-background"
/>
</div>
</div>
</Modal>
);
}
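The dialog above relies on the number input's `min={time}` to keep region times ordered, but typed values can still bypass that constraint before save. A minimal pre-save guard sketch (the helper name is illustrative, not part of the dialog):

```typescript
// Clamp a region's times so the start is never negative and the end
// never precedes the start (sketch of a pre-save guard; hypothetical helper).
function normalizeRegionTimes(
  time: number,
  endTime: number
): { time: number; endTime: number } {
  const start = Math.max(0, time);
  return { time: start, endTime: Math.max(start, endTime) };
}
```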


@@ -0,0 +1,216 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
import type { Marker } from '@/types/marker';
import { Flag, Edit2, Trash2 } from 'lucide-react';
import { Button } from '@/components/ui/Button';
export interface MarkerTimelineProps {
markers: Marker[];
duration: number;
currentTime: number;
onMarkerClick?: (marker: Marker) => void;
onMarkerEdit?: (marker: Marker) => void;
onMarkerDelete?: (markerId: string) => void;
onSeek?: (time: number) => void;
className?: string;
}
export function MarkerTimeline({
markers,
duration,
currentTime,
onMarkerClick,
onMarkerEdit,
onMarkerDelete,
onSeek,
className,
}: MarkerTimelineProps) {
const containerRef = React.useRef<HTMLDivElement>(null);
const [hoveredMarkerId, setHoveredMarkerId] = React.useState<string | null>(null);
const timeToX = React.useCallback(
(time: number): number => {
if (!containerRef.current || duration <= 0) return 0;
const width = containerRef.current.clientWidth;
return (time / duration) * width;
},
[duration]
);
return (
<div
ref={containerRef}
className={cn(
'relative w-full h-8 bg-muted/30 border-b border-border',
className
)}
>
{/* Markers */}
{markers.map((marker) => {
const x = timeToX(marker.time);
const isHovered = hoveredMarkerId === marker.id;
if (marker.type === 'point') {
return (
<div
key={marker.id}
className="absolute top-0 bottom-0 group cursor-pointer"
style={{ left: `${x}px` }}
onMouseEnter={() => setHoveredMarkerId(marker.id)}
onMouseLeave={() => setHoveredMarkerId(null)}
onClick={() => {
onMarkerClick?.(marker);
onSeek?.(marker.time);
}}
>
{/* Marker line */}
<div
className={cn(
'absolute top-0 bottom-0 w-0.5 transition-colors',
isHovered ? 'bg-primary' : 'bg-primary/60'
)}
style={{ backgroundColor: marker.color }}
/>
{/* Marker flag */}
<Flag
className={cn(
'absolute top-0.5 -left-2 h-4 w-4 transition-colors',
isHovered ? 'text-primary' : 'text-primary/60'
)}
style={{ color: marker.color }}
/>
{/* Hover tooltip with actions */}
{isHovered && (
<div className="absolute top-full left-0 mt-1 z-10 bg-popover border border-border rounded shadow-lg p-2 min-w-[200px]">
<div className="text-xs font-medium mb-1">{marker.name}</div>
{marker.description && (
<div className="text-xs text-muted-foreground mb-2">{marker.description}</div>
)}
<div className="flex gap-1">
{onMarkerEdit && (
<Button
variant="ghost"
size="icon-sm"
onClick={(e) => {
e.stopPropagation();
onMarkerEdit(marker);
}}
title="Edit marker"
className="h-6 w-6"
>
<Edit2 className="h-3 w-3" />
</Button>
)}
{onMarkerDelete && (
<Button
variant="ghost"
size="icon-sm"
onClick={(e) => {
e.stopPropagation();
onMarkerDelete(marker.id);
}}
title="Delete marker"
className="h-6 w-6 text-destructive hover:text-destructive"
>
<Trash2 className="h-3 w-3" />
</Button>
)}
</div>
</div>
)}
</div>
);
} else {
// Region marker
const endX = timeToX(marker.endTime || marker.time);
const width = endX - x;
return (
<div
key={marker.id}
className="absolute top-0 bottom-0 group cursor-pointer"
style={{ left: `${x}px`, width: `${width}px` }}
onMouseEnter={() => setHoveredMarkerId(marker.id)}
onMouseLeave={() => setHoveredMarkerId(null)}
onClick={() => {
onMarkerClick?.(marker);
onSeek?.(marker.time);
}}
>
{/* Region background */}
<div
className={cn(
'absolute inset-0 transition-opacity',
isHovered ? 'opacity-30' : 'opacity-20'
)}
style={{ backgroundColor: marker.color || 'var(--color-primary)' }}
/>
{/* Region borders */}
<div
className="absolute top-0 bottom-0 left-0 w-0.5"
style={{ backgroundColor: marker.color || 'var(--color-primary)' }}
/>
<div
className="absolute top-0 bottom-0 right-0 w-0.5"
style={{ backgroundColor: marker.color || 'var(--color-primary)' }}
/>
{/* Region label */}
<div
className="absolute top-0.5 left-1 text-[10px] font-medium truncate pr-1"
style={{ color: marker.color || 'var(--color-primary)', maxWidth: `${Math.max(0, width - 8)}px` }}
>
{marker.name}
</div>
{/* Hover tooltip with actions */}
{isHovered && (
<div className="absolute top-full left-0 mt-1 z-10 bg-popover border border-border rounded shadow-lg p-2 min-w-[200px]">
<div className="text-xs font-medium mb-1">{marker.name}</div>
{marker.description && (
<div className="text-xs text-muted-foreground mb-2">{marker.description}</div>
)}
<div className="flex gap-1">
{onMarkerEdit && (
<Button
variant="ghost"
size="icon-sm"
onClick={(e) => {
e.stopPropagation();
onMarkerEdit(marker);
}}
title="Edit marker"
className="h-6 w-6"
>
<Edit2 className="h-3 w-3" />
</Button>
)}
{onMarkerDelete && (
<Button
variant="ghost"
size="icon-sm"
onClick={(e) => {
e.stopPropagation();
onMarkerDelete(marker.id);
}}
title="Delete marker"
className="h-6 w-6 text-destructive hover:text-destructive"
>
<Trash2 className="h-3 w-3" />
</Button>
)}
</div>
</div>
)}
</div>
);
}
})}
</div>
);
}
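`timeToX` above maps a time to a pixel offset; click-to-seek on empty timeline space would need the inverse mapping. A sketch under the same linear scale (the helper name is hypothetical, not part of the component):

```typescript
// Inverse of the component's timeToX: convert a click's x offset within the
// container back into a timeline position, clamped to [0, duration].
function xToTime(x: number, containerWidth: number, duration: number): number {
  if (containerWidth <= 0 || duration <= 0) return 0;
  return Math.max(0, Math.min(duration, (x / containerWidth) * duration));
}
```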


@@ -0,0 +1,86 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface InputLevelMeterProps {
level: number; // 0.0 to 1.0 (normalized dB scale)
orientation?: 'horizontal' | 'vertical';
className?: string;
}
export function InputLevelMeter({
level,
orientation = 'horizontal',
className,
}: InputLevelMeterProps) {
// Clamp level between 0 and 1
const clampedLevel = Math.max(0, Math.min(1, level));
const isHorizontal = orientation === 'horizontal';
// Professional audio meter gradient (matching the stops below):
// Green up to 70% (~-60 dB to -18 dB)
// Yellow around 85% (~-18 dB to -6 dB)
// Red at 100% (approaching 0 dB)
const gradient = isHorizontal
? 'linear-gradient(to right, rgb(34, 197, 94) 0%, rgb(34, 197, 94) 70%, rgb(234, 179, 8) 85%, rgb(239, 68, 68) 100%)'
: 'linear-gradient(to top, rgb(34, 197, 94) 0%, rgb(34, 197, 94) 70%, rgb(234, 179, 8) 85%, rgb(239, 68, 68) 100%)';
return (
<div
className={cn(
'relative bg-muted rounded-sm overflow-hidden',
isHorizontal ? 'h-4 w-full' : 'w-4 h-full',
className
)}
>
{/* Level bar with gradient */}
<div
className={cn(
'absolute transition-all duration-75 ease-out',
isHorizontal ? 'h-full left-0 top-0' : 'w-full bottom-0 left-0'
)}
style={{
[isHorizontal ? 'width' : 'height']: `${clampedLevel * 100}%`,
background: gradient,
}}
/>
{/* Clip indicator (at 90%) */}
{clampedLevel > 0.9 && (
<div
className={cn(
'absolute bg-red-600 animate-pulse',
isHorizontal
? 'right-0 top-0 w-1 h-full'
: 'bottom-0 left-0 h-1 w-full'
)}
/>
)}
{/* Tick marks */}
<div className="absolute inset-0">
{[0.25, 0.5, 0.75].map((tick) => (
<div
key={tick}
className={cn(
'absolute bg-background/30',
isHorizontal
? 'h-full w-px top-0'
: 'w-full h-px left-0'
)}
style={{
[isHorizontal ? 'left' : 'bottom']: `${tick * 100}%`,
}}
/>
))}
</div>
</div>
);
}
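The gradient comments above assume the incoming `level` prop has already been normalized from dBFS. A sketch of that normalization, assuming a -60 dBFS floor (the constant and helper name are illustrative, not part of the component):

```typescript
// Map a linear amplitude (0..1) onto the meter's normalized scale, where
// 0 corresponds to -60 dBFS and 1 to 0 dBFS (assumed floor, matching the
// gradient comments above).
const MIN_DB = -60;

function amplitudeToMeterLevel(amplitude: number): number {
  if (amplitude <= 0) return 0;
  const db = 20 * Math.log10(amplitude); // linear -> dBFS
  return Math.max(0, Math.min(1, (db - MIN_DB) / -MIN_DB));
}
```

An input at -18 dBFS then lands at (-18 + 60) / 60 = 70% of the meter, the top of the green zone.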


@@ -0,0 +1,106 @@
'use client';
import * as React from 'react';
import { Volume2, Radio } from 'lucide-react';
import { Slider } from '@/components/ui/Slider';
import { cn } from '@/lib/utils/cn';
import type { RecordingSettings as RecordingSettingsType } from '@/lib/hooks/useRecording';
export interface RecordingSettingsProps {
settings: RecordingSettingsType;
onInputGainChange: (gain: number) => void;
onRecordMonoChange: (mono: boolean) => void;
onSampleRateChange: (sampleRate: number) => void;
className?: string;
}
const SAMPLE_RATES = [44100, 48000, 96000];
export function RecordingSettings({
settings,
onInputGainChange,
onRecordMonoChange,
onSampleRateChange,
className,
}: RecordingSettingsProps) {
return (
<div className={cn('space-y-2 p-3 bg-muted/50 rounded border border-border', className)}>
<div className="text-xs font-medium text-muted-foreground mb-2">Recording Settings</div>
{/* Input Gain */}
<div className="flex items-center gap-2">
<label className="text-xs text-muted-foreground flex items-center gap-1 w-24 flex-shrink-0">
<Volume2 className="h-3.5 w-3.5" />
Input Gain
</label>
<div className="flex-1">
<Slider
value={[settings.inputGain]}
onValueChange={([value]) => onInputGainChange(value)}
min={0}
max={2}
step={0.1}
/>
</div>
<span className="text-xs text-muted-foreground w-12 text-right flex-shrink-0">
{settings.inputGain <= 0
? '-∞ dB'
: settings.inputGain === 1
? '0 dB'
: `${(20 * Math.log10(settings.inputGain)).toFixed(1)} dB`}
</span>
</div>
{/* Mono/Stereo Toggle */}
<div className="flex items-center gap-2">
<label className="text-xs text-muted-foreground flex items-center gap-1 w-24 flex-shrink-0">
<Radio className="h-3.5 w-3.5" />
Channels
</label>
<div className="flex gap-1 flex-1">
<button
onClick={() => onRecordMonoChange(true)}
className={cn(
'flex-1 px-2 py-1 text-xs rounded transition-colors',
settings.recordMono
? 'bg-primary text-primary-foreground'
: 'bg-muted hover:bg-muted/80 text-muted-foreground'
)}
>
Mono
</button>
<button
onClick={() => onRecordMonoChange(false)}
className={cn(
'flex-1 px-2 py-1 text-xs rounded transition-colors',
!settings.recordMono
? 'bg-primary text-primary-foreground'
: 'bg-muted hover:bg-muted/80 text-muted-foreground'
)}
>
Stereo
</button>
</div>
</div>
{/* Sample Rate Selection */}
<div className="flex items-center gap-2">
<label className="text-xs text-muted-foreground w-24 flex-shrink-0">
Sample Rate
</label>
<div className="flex gap-1 flex-1">
{SAMPLE_RATES.map((rate) => (
<button
key={rate}
onClick={() => onSampleRateChange(rate)}
className={cn(
'flex-1 px-2 py-1 text-xs rounded transition-colors font-mono',
settings.sampleRate === rate
? 'bg-primary text-primary-foreground'
: 'bg-muted hover:bg-muted/80 text-muted-foreground'
)}
>
{rate / 1000}k
</button>
))}
</div>
</div>
</div>
);
}
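The inline dB readout beside the input-gain slider can be factored into a helper; a sketch (the name is illustrative, and the zero-gain guard avoids `Math.log10(0)`, which is `-Infinity`):

```typescript
// Format a linear gain multiplier as the dB string shown beside the slider.
// Guards gain <= 0, where log10 is undefined or -Infinity.
function formatGainDb(gain: number): string {
  if (gain <= 0) return '-∞ dB';
  if (gain === 1) return '0 dB';
  return `${(20 * Math.log10(gain)).toFixed(1)} dB`;
}
```

For example, a gain of 2 formats as "6.0 dB" and 0.5 as "-6.0 dB".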


@@ -0,0 +1,532 @@
'use client';
import * as React from 'react';
import { X, RotateCcw } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { Slider } from '@/components/ui/Slider';
import { RecordingSettings } from '@/components/recording/RecordingSettings';
import { cn } from '@/lib/utils/cn';
import type { RecordingSettings as RecordingSettingsType } from '@/lib/hooks/useRecording';
import type {
Settings,
AudioSettings,
UISettings,
EditorSettings,
PerformanceSettings,
} from '@/lib/hooks/useSettings';
export interface GlobalSettingsDialogProps {
open: boolean;
onClose: () => void;
recordingSettings: RecordingSettingsType;
onInputGainChange: (gain: number) => void;
onRecordMonoChange: (mono: boolean) => void;
onSampleRateChange: (sampleRate: number) => void;
settings: Settings;
onAudioSettingsChange: (updates: Partial<AudioSettings>) => void;
onUISettingsChange: (updates: Partial<UISettings>) => void;
onEditorSettingsChange: (updates: Partial<EditorSettings>) => void;
onPerformanceSettingsChange: (updates: Partial<PerformanceSettings>) => void;
onResetCategory: (category: 'audio' | 'ui' | 'editor' | 'performance') => void;
}
type TabType = 'recording' | 'audio' | 'editor' | 'interface' | 'performance';
export function GlobalSettingsDialog({
open,
onClose,
recordingSettings,
onInputGainChange,
onRecordMonoChange,
onSampleRateChange,
settings,
onAudioSettingsChange,
onUISettingsChange,
onEditorSettingsChange,
onPerformanceSettingsChange,
onResetCategory,
}: GlobalSettingsDialogProps) {
const [activeTab, setActiveTab] = React.useState<TabType>('recording');
if (!open) return null;
return (
<>
{/* Backdrop */}
<div
className="fixed inset-0 bg-background/80 backdrop-blur-sm z-40"
onClick={onClose}
/>
{/* Dialog */}
<div className="fixed left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2 w-full max-w-3xl z-50">
<div className="bg-card border border-border rounded-lg shadow-2xl overflow-hidden">
{/* Header */}
<div className="flex items-center justify-between px-6 py-4 border-b border-border">
<h2 className="text-lg font-semibold">Settings</h2>
<Button
variant="ghost"
size="icon-sm"
onClick={onClose}
title="Close"
>
<X className="h-4 w-4" />
</Button>
</div>
{/* Tabs */}
<div className="flex border-b border-border bg-muted/30 overflow-x-auto">
{[
{ id: 'recording', label: 'Recording' },
{ id: 'audio', label: 'Audio' },
{ id: 'editor', label: 'Editor' },
{ id: 'interface', label: 'Interface' },
{ id: 'performance', label: 'Performance' },
].map((tab) => (
<button
key={tab.id}
onClick={() => setActiveTab(tab.id as TabType)}
className={cn(
'px-6 py-3 text-sm font-medium transition-colors relative flex-shrink-0',
activeTab === tab.id
? 'text-foreground bg-card'
: 'text-muted-foreground hover:text-foreground hover:bg-muted/50'
)}
>
{tab.label}
{activeTab === tab.id && (
<div className="absolute bottom-0 left-0 right-0 h-0.5 bg-primary" />
)}
</button>
))}
</div>
{/* Content */}
<div className="p-6 max-h-[60vh] overflow-y-auto custom-scrollbar">
{/* Recording Tab */}
{activeTab === 'recording' && (
<div className="space-y-4">
<div className="flex items-center justify-between">
<h3 className="text-sm font-medium">Recording Settings</h3>
</div>
<RecordingSettings
settings={recordingSettings}
onInputGainChange={onInputGainChange}
onRecordMonoChange={onRecordMonoChange}
onSampleRateChange={onSampleRateChange}
className="border-0 bg-transparent p-0"
/>
<div className="pt-4 border-t border-border">
<h3 className="text-sm font-medium mb-2">Note</h3>
<p className="text-sm text-muted-foreground">
These settings apply globally to all recordings. Arm a track (red button)
to enable recording on that specific track.
</p>
</div>
</div>
)}
{/* Audio Tab */}
{activeTab === 'audio' && (
<div className="space-y-6">
<div className="flex items-center justify-between">
<h3 className="text-sm font-medium">Audio Settings</h3>
<Button
variant="ghost"
size="sm"
onClick={() => onResetCategory('audio')}
className="h-7 text-xs"
>
<RotateCcw className="h-3 w-3 mr-1" />
Reset
</Button>
</div>
{/* Buffer Size */}
<div className="space-y-2">
<label className="text-sm font-medium">Buffer Size</label>
<select
value={settings.audio.bufferSize}
onChange={(e) =>
onAudioSettingsChange({ bufferSize: Number(e.target.value) })
}
className="w-full px-3 py-2 bg-background border border-border rounded text-sm"
>
<option value={256}>256 samples (Low latency, higher CPU)</option>
<option value={512}>512 samples</option>
<option value={1024}>1024 samples</option>
<option value={2048}>2048 samples (Recommended)</option>
<option value={4096}>4096 samples (Low CPU)</option>
</select>
<p className="text-xs text-muted-foreground">
Smaller buffer = lower latency but higher CPU usage. Requires reload.
</p>
</div>
{/* Sample Rate */}
<div className="space-y-2">
<label className="text-sm font-medium">Default Sample Rate</label>
<select
value={settings.audio.sampleRate}
onChange={(e) =>
onAudioSettingsChange({ sampleRate: Number(e.target.value) })
}
className="w-full px-3 py-2 bg-background border border-border rounded text-sm"
>
<option value={44100}>44.1 kHz (CD Quality)</option>
<option value={48000}>48 kHz (Professional)</option>
<option value={96000}>96 kHz (Hi-Res Audio)</option>
</select>
<p className="text-xs text-muted-foreground">
Higher sample rate = better quality but larger file sizes.
</p>
</div>
{/* Auto Normalize */}
<div className="flex items-center justify-between p-3 bg-muted/50 rounded">
<div>
<div className="text-sm font-medium">Auto-Normalize on Import</div>
<p className="text-xs text-muted-foreground">
Automatically normalize audio when importing files
</p>
</div>
<input
type="checkbox"
checked={settings.audio.autoNormalizeOnImport}
onChange={(e) =>
onAudioSettingsChange({ autoNormalizeOnImport: e.target.checked })
}
className="h-4 w-4"
/>
</div>
</div>
)}
{/* Editor Tab */}
{activeTab === 'editor' && (
<div className="space-y-6">
<div className="flex items-center justify-between">
<h3 className="text-sm font-medium">Editor Settings</h3>
<Button
variant="ghost"
size="sm"
onClick={() => onResetCategory('editor')}
className="h-7 text-xs"
>
<RotateCcw className="h-3 w-3 mr-1" />
Reset
</Button>
</div>
{/* Auto-Save Interval */}
<div className="space-y-2">
<div className="flex items-center justify-between">
<label className="text-sm font-medium">Auto-Save Interval</label>
<span className="text-xs font-mono text-muted-foreground">
{settings.editor.autoSaveInterval === 0
? 'Disabled'
: `${settings.editor.autoSaveInterval}s`}
</span>
</div>
<Slider
value={[settings.editor.autoSaveInterval]}
onValueChange={([value]) =>
onEditorSettingsChange({ autoSaveInterval: value })
}
min={0}
max={30}
step={1}
className="w-full"
/>
<p className="text-xs text-muted-foreground">
Set to 0 to disable auto-save. Default: 3 seconds.
</p>
</div>
{/* Undo History Limit */}
<div className="space-y-2">
<div className="flex items-center justify-between">
<label className="text-sm font-medium">Undo History Limit</label>
<span className="text-xs font-mono text-muted-foreground">
{settings.editor.undoHistoryLimit} operations
</span>
</div>
<Slider
value={[settings.editor.undoHistoryLimit]}
onValueChange={([value]) =>
onEditorSettingsChange({ undoHistoryLimit: value })
}
min={10}
max={200}
step={10}
className="w-full"
/>
<p className="text-xs text-muted-foreground">
Higher values use more memory. Default: 50.
</p>
</div>
{/* Snap to Grid */}
<div className="flex items-center justify-between p-3 bg-muted/50 rounded">
<div>
<div className="text-sm font-medium">Snap to Grid</div>
<p className="text-xs text-muted-foreground">
Snap playhead and selections to grid lines
</p>
</div>
<input
type="checkbox"
checked={settings.editor.snapToGrid}
onChange={(e) =>
onEditorSettingsChange({ snapToGrid: e.target.checked })
}
className="h-4 w-4"
/>
</div>
{/* Grid Resolution */}
<div className="space-y-2">
<div className="flex items-center justify-between">
<label className="text-sm font-medium">Grid Resolution</label>
<span className="text-xs font-mono text-muted-foreground">
{settings.editor.gridResolution}s
</span>
</div>
<Slider
value={[settings.editor.gridResolution]}
onValueChange={([value]) =>
onEditorSettingsChange({ gridResolution: value })
}
min={0.1}
max={5}
step={0.1}
className="w-full"
/>
<p className="text-xs text-muted-foreground">
Grid spacing in seconds. Default: 1.0s.
</p>
</div>
{/* Default Zoom */}
<div className="space-y-2">
<div className="flex items-center justify-between">
<label className="text-sm font-medium">Default Zoom Level</label>
<span className="text-xs font-mono text-muted-foreground">
{settings.editor.defaultZoom}x
</span>
</div>
<Slider
value={[settings.editor.defaultZoom]}
onValueChange={([value]) =>
onEditorSettingsChange({ defaultZoom: value })
}
min={1}
max={20}
step={1}
className="w-full"
/>
<p className="text-xs text-muted-foreground">
Initial zoom level when opening projects. Default: 1x.
</p>
</div>
</div>
)}
{/* Interface Tab */}
{activeTab === 'interface' && (
<div className="space-y-6">
<div className="flex items-center justify-between">
<h3 className="text-sm font-medium">Interface Settings</h3>
<Button
variant="ghost"
size="sm"
onClick={() => onResetCategory('ui')}
className="h-7 text-xs"
>
<RotateCcw className="h-3 w-3 mr-1" />
Reset
</Button>
</div>
{/* Theme */}
<div className="space-y-2">
<label className="text-sm font-medium">Theme</label>
<div className="flex gap-2">
{['dark', 'light', 'auto'].map((theme) => (
<button
key={theme}
onClick={() =>
onUISettingsChange({ theme: theme as 'dark' | 'light' | 'auto' })
}
className={cn(
'flex-1 px-4 py-2 rounded text-sm font-medium transition-colors',
settings.ui.theme === theme
? 'bg-primary text-primary-foreground'
: 'bg-muted hover:bg-muted/80'
)}
>
{theme.charAt(0).toUpperCase() + theme.slice(1)}
</button>
))}
</div>
<p className="text-xs text-muted-foreground">
Use the theme toggle in the header for quick switching.
</p>
</div>
{/* Font Size */}
<div className="space-y-2">
<label className="text-sm font-medium">Font Size</label>
<div className="flex gap-2">
{['small', 'medium', 'large'].map((size) => (
<button
key={size}
onClick={() =>
onUISettingsChange({ fontSize: size as 'small' | 'medium' | 'large' })
}
className={cn(
'flex-1 px-4 py-2 rounded text-sm font-medium transition-colors',
settings.ui.fontSize === size
? 'bg-primary text-primary-foreground'
: 'bg-muted hover:bg-muted/80'
)}
>
{size.charAt(0).toUpperCase() + size.slice(1)}
</button>
))}
</div>
<p className="text-xs text-muted-foreground">
Adjust the UI font size. Requires reload.
</p>
</div>
</div>
)}
{/* Performance Tab */}
{activeTab === 'performance' && (
<div className="space-y-6">
<div className="flex items-center justify-between">
<h3 className="text-sm font-medium">Performance Settings</h3>
<Button
variant="ghost"
size="sm"
onClick={() => onResetCategory('performance')}
className="h-7 text-xs"
>
<RotateCcw className="h-3 w-3 mr-1" />
Reset
</Button>
</div>
{/* Peak Calculation Quality */}
<div className="space-y-2">
<label className="text-sm font-medium">Peak Calculation Quality</label>
<div className="flex gap-2">
{['low', 'medium', 'high'].map((quality) => (
<button
key={quality}
onClick={() =>
onPerformanceSettingsChange({
peakCalculationQuality: quality as 'low' | 'medium' | 'high',
})
}
className={cn(
'flex-1 px-4 py-2 rounded text-sm font-medium transition-colors',
settings.performance.peakCalculationQuality === quality
? 'bg-primary text-primary-foreground'
: 'bg-muted hover:bg-muted/80'
)}
>
{quality.charAt(0).toUpperCase() + quality.slice(1)}
</button>
))}
</div>
<p className="text-xs text-muted-foreground">
Higher quality = more accurate waveforms, slower processing.
</p>
</div>
{/* Waveform Rendering Quality */}
<div className="space-y-2">
<label className="text-sm font-medium">Waveform Rendering Quality</label>
<div className="flex gap-2">
{['low', 'medium', 'high'].map((quality) => (
<button
key={quality}
onClick={() =>
onPerformanceSettingsChange({
waveformRenderingQuality: quality as 'low' | 'medium' | 'high',
})
}
className={cn(
'flex-1 px-4 py-2 rounded text-sm font-medium transition-colors',
settings.performance.waveformRenderingQuality === quality
? 'bg-primary text-primary-foreground'
: 'bg-muted hover:bg-muted/80'
)}
>
{quality.charAt(0).toUpperCase() + quality.slice(1)}
</button>
))}
</div>
<p className="text-xs text-muted-foreground">
Lower quality = better performance on slower devices.
</p>
</div>
{/* Enable Spectrogram */}
<div className="flex items-center justify-between p-3 bg-muted/50 rounded">
<div>
<div className="text-sm font-medium">Enable Spectrogram</div>
<p className="text-xs text-muted-foreground">
Show spectrogram in analysis tools (requires more CPU)
</p>
</div>
<input
type="checkbox"
checked={settings.performance.enableSpectrogram}
onChange={(e) =>
onPerformanceSettingsChange({ enableSpectrogram: e.target.checked })
}
className="h-4 w-4"
/>
</div>
{/* Max File Size */}
<div className="space-y-2">
<div className="flex items-center justify-between">
<label className="text-sm font-medium">Maximum File Size</label>
<span className="text-xs font-mono text-muted-foreground">
{settings.performance.maxFileSizeMB} MB
</span>
</div>
<Slider
value={[settings.performance.maxFileSizeMB]}
onValueChange={([value]) =>
onPerformanceSettingsChange({ maxFileSizeMB: value })
}
min={100}
max={1000}
step={50}
className="w-full"
/>
<p className="text-xs text-muted-foreground">
Warn when importing files larger than this. Default: 500 MB.
</p>
</div>
</div>
)}
</div>
{/* Footer */}
<div className="flex items-center justify-end gap-2 px-6 py-4 border-t border-border bg-muted/30">
<Button variant="default" onClick={onClose}>
Done
</Button>
</div>
</div>
</div>
</>
);
}
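The buffer-size help text above ("smaller buffer = lower latency") follows directly from latency ≈ bufferSize / sampleRate. A quick sketch of that arithmetic — a standalone illustration, not code from this repo:

```typescript
// One-way latency contributed by the audio buffer, in milliseconds.
function bufferLatencyMs(bufferSize: number, sampleRate: number): number {
  return (bufferSize / sampleRate) * 1000;
}

// At 48 kHz: 256 samples is roughly 5.3 ms (the "low latency" option),
// while the recommended 2048-sample buffer is roughly 42.7 ms.
const low = bufferLatencyMs(256, 48000);
const recommended = bufferLatencyMs(2048, 48000);
```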


@@ -0,0 +1,265 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
import {
calculateTickInterval,
formatTimeLabel,
getVisibleTimeRange,
} from '@/lib/utils/timeline';
export interface TimeScaleProps {
duration: number;
zoom: number;
currentTime: number;
onSeek?: (time: number) => void;
className?: string;
height?: number;
controlsWidth?: number;
scrollRef?: React.MutableRefObject<HTMLDivElement | null>;
onScroll?: () => void;
}
export function TimeScale({
duration,
zoom,
currentTime,
onSeek,
className,
height = 40,
controlsWidth = 240,
scrollRef: externalScrollRef,
onScroll,
}: TimeScaleProps) {
const localScrollRef = React.useRef<HTMLDivElement>(null);
const scrollRef = externalScrollRef || localScrollRef;
const canvasRef = React.useRef<HTMLCanvasElement>(null);
const [viewportWidth, setViewportWidth] = React.useState(800);
const [scrollLeft, setScrollLeft] = React.useState(0);
const [hoverTime, setHoverTime] = React.useState<number | null>(null);
// Calculate total timeline width (match waveform calculation)
// Uses 5 pixels per second as base scale, multiplied by zoom
// Always ensure minimum width is at least viewport width for full coverage
const PIXELS_PER_SECOND_BASE = 5;
const totalWidth = React.useMemo(() => {
if (zoom >= 1) {
const calculatedWidth = duration * zoom * PIXELS_PER_SECOND_BASE;
// Ensure it's at least viewport width so timeline always fills
return Math.max(calculatedWidth, viewportWidth);
}
return viewportWidth;
}, [duration, zoom, viewportWidth]);
// Update viewport width on resize
React.useEffect(() => {
const scroller = scrollRef.current;
if (!scroller) return;
const updateWidth = () => {
setViewportWidth(scroller.clientWidth);
};
updateWidth();
const resizeObserver = new ResizeObserver(updateWidth);
resizeObserver.observe(scroller);
return () => resizeObserver.disconnect();
}, [scrollRef]);
// Handle scroll - update scrollLeft and trigger onScroll callback
const handleScroll = React.useCallback(() => {
if (scrollRef.current) {
setScrollLeft(scrollRef.current.scrollLeft);
}
if (onScroll) {
onScroll();
}
}, [onScroll, scrollRef]);
// Draw time scale - redraws on scroll and zoom
React.useEffect(() => {
const canvas = canvasRef.current;
if (!canvas || duration === 0) return;
const ctx = canvas.getContext('2d');
if (!ctx) return;
// Set canvas size to viewport width
const dpr = window.devicePixelRatio || 1;
canvas.width = viewportWidth * dpr;
canvas.height = height * dpr;
canvas.style.width = `${viewportWidth}px`;
canvas.style.height = `${height}px`;
ctx.scale(dpr, dpr);
// Clear canvas
ctx.fillStyle = getComputedStyle(canvas).getPropertyValue('--color-background') || '#ffffff';
ctx.fillRect(0, 0, viewportWidth, height);
// Calculate visible time range
const visibleRange = getVisibleTimeRange(scrollLeft, viewportWidth, duration, zoom);
const visibleDuration = visibleRange.end - visibleRange.start;
// Calculate tick intervals based on visible duration
const { major, minor } = calculateTickInterval(visibleDuration);
// Calculate which ticks to draw (only visible ones)
const startTick = Math.floor(visibleRange.start / minor) * minor;
const endTick = Math.ceil(visibleRange.end / minor) * minor;
// Set up text style for labels
ctx.font = '12px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
ctx.textAlign = 'center';
ctx.textBaseline = 'top';
// Resolve theme colors once per draw instead of re-reading computed styles on every tick
const foregroundColor = getComputedStyle(canvas).getPropertyValue('--color-foreground') || '#000000';
const mutedColor = getComputedStyle(canvas).getPropertyValue('--color-muted-foreground') || '#9ca3af';
// Draw ticks and labels - compute each tick from its index to avoid
// floating-point drift from repeatedly adding `minor`
const tickCount = Math.round((endTick - startTick) / minor);
for (let i = 0; i <= tickCount; i++) {
const time = startTick + i * minor;
if (time < 0 || time > duration) continue;
// Calculate x position using the actual totalWidth (not timeToPixel, which recalculates)
const x = (time / duration) * totalWidth - scrollLeft;
if (x < 0 || x > viewportWidth) continue;
const isMajor = Math.abs(time % major) < 0.001;
if (isMajor) {
// Major ticks - tall and prominent
ctx.strokeStyle = foregroundColor;
ctx.lineWidth = 2;
ctx.beginPath();
ctx.moveTo(x, height - 20);
ctx.lineTo(x, height);
ctx.stroke();
// Major tick label
ctx.fillStyle = foregroundColor;
const label = formatTimeLabel(time, visibleDuration < 10);
ctx.fillText(label, x, 6);
} else {
// Minor ticks - shorter and lighter
ctx.strokeStyle = mutedColor;
ctx.lineWidth = 1;
ctx.beginPath();
ctx.moveTo(x, height - 10);
ctx.lineTo(x, height);
ctx.stroke();
// Minor tick label (smaller and lighter)
if (x > 20 && x < viewportWidth - 20) {
ctx.fillStyle = mutedColor;
ctx.font = '10px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
const label = formatTimeLabel(time, visibleDuration < 10);
ctx.fillText(label, x, 8);
ctx.font = '12px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
}
}
}
// Draw playhead indicator
const playheadX = (currentTime / duration) * totalWidth - scrollLeft;
if (playheadX >= 0 && playheadX <= viewportWidth) {
ctx.strokeStyle = '#ef4444';
ctx.lineWidth = 2;
ctx.beginPath();
ctx.moveTo(playheadX, 0);
ctx.lineTo(playheadX, height);
ctx.stroke();
}
// Draw hover indicator
if (hoverTime !== null) {
const hoverX = (hoverTime / duration) * totalWidth - scrollLeft;
if (hoverX >= 0 && hoverX <= viewportWidth) {
ctx.strokeStyle = 'rgba(59, 130, 246, 0.5)';
ctx.lineWidth = 1;
ctx.setLineDash([3, 3]);
ctx.beginPath();
ctx.moveTo(hoverX, 0);
ctx.lineTo(hoverX, height);
ctx.stroke();
ctx.setLineDash([]);
}
}
}, [duration, zoom, currentTime, viewportWidth, scrollLeft, height, hoverTime, totalWidth]);
// Handle click to seek
const handleClick = React.useCallback(
(e: React.MouseEvent<HTMLCanvasElement>) => {
if (!onSeek) return;
const rect = e.currentTarget.getBoundingClientRect();
const x = e.clientX - rect.left;
const pixelPos = x + scrollLeft;
const time = (pixelPos / totalWidth) * duration;
onSeek(Math.max(0, Math.min(duration, time)));
},
[onSeek, duration, totalWidth, scrollLeft]
);
// Handle mouse move for hover
const handleMouseMove = React.useCallback(
(e: React.MouseEvent<HTMLCanvasElement>) => {
const rect = e.currentTarget.getBoundingClientRect();
const x = e.clientX - rect.left;
const pixelPos = x + scrollLeft;
const time = (pixelPos / totalWidth) * duration;
setHoverTime(Math.max(0, Math.min(duration, time)));
},
[duration, totalWidth, scrollLeft]
);
const handleMouseLeave = React.useCallback(() => {
setHoverTime(null);
}, []);
return (
<div className={cn('relative bg-background', className)} style={{ paddingLeft: '240px', paddingRight: '250px' }}>
<div
ref={scrollRef}
className="w-full bg-background overflow-x-auto overflow-y-hidden custom-scrollbar"
style={{
height: `${height}px`,
}}
onScroll={handleScroll}
>
{/* Spacer to create scrollable width */}
<div style={{ width: `${totalWidth}px`, height: `${height}px`, position: 'relative' }}>
<canvas
ref={canvasRef}
onClick={handleClick}
onMouseMove={handleMouseMove}
onMouseLeave={handleMouseLeave}
className="cursor-pointer"
style={{
position: 'sticky',
left: 0,
width: `${viewportWidth}px`,
height: `${height}px`,
}}
/>
</div>
</div>
{/* Hover tooltip */}
{hoverTime !== null && (
<div
className="absolute top-full mt-1 px-2 py-1 bg-popover border border-border rounded shadow-lg text-xs font-mono pointer-events-none z-10"
style={{
left: `${Math.min(
viewportWidth - 60 + controlsWidth,
Math.max(controlsWidth, (hoverTime / duration) * totalWidth - scrollLeft - 30 + controlsWidth)
)}px`,
}}
>
{formatTimeLabel(hoverTime, true)}
</div>
)}
</div>
);
}
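The spacer width, click-to-seek, and hover math in TimeScale all share one mapping: 5 px per second at zoom 1, scaled by zoom and clamped to at least the viewport width. Extracted as plain functions for clarity — a sketch of the logic above, not an export from the repo:

```typescript
const PIXELS_PER_SECOND_BASE = 5;

// Total scrollable timeline width; never narrower than the viewport.
function timelineWidth(duration: number, zoom: number, viewportWidth: number): number {
  if (zoom >= 1) {
    return Math.max(duration * zoom * PIXELS_PER_SECOND_BASE, viewportWidth);
  }
  return viewportWidth;
}

// Map a viewport-relative x position back to a time, clamped to [0, duration].
function pixelToSeconds(
  x: number,
  scrollLeft: number,
  duration: number,
  zoom: number,
  viewportWidth: number,
): number {
  const total = timelineWidth(duration, zoom, viewportWidth);
  const time = ((x + scrollLeft) / total) * duration;
  return Math.max(0, Math.min(duration, time));
}
```

With a 60 s file at zoom 2 in a 400 px viewport the spacer is 600 px wide, so a click at x = 300 with no scroll seeks to 30 s.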


@@ -0,0 +1,159 @@
'use client';
import * as React from 'react';
import { Upload, Plus } from 'lucide-react';
import { Modal } from '@/components/ui/Modal';
import { Button } from '@/components/ui/Button';
import { decodeAudioFile } from '@/lib/audio/decoder';
export interface ImportTrackDialogProps {
open: boolean;
onClose: () => void;
onImportTrack: (buffer: AudioBuffer, name: string) => void;
}
export function ImportTrackDialog({
open,
onClose,
onImportTrack,
}: ImportTrackDialogProps) {
const [isDragging, setIsDragging] = React.useState(false);
const [isLoading, setIsLoading] = React.useState(false);
const fileInputRef = React.useRef<HTMLInputElement>(null);
const handleFiles = async (files: FileList) => {
setIsLoading(true);
// Snapshot the live FileList into a plain array so it can't mutate while we await decoding
const fileArray = Array.from(files);
console.log(`[ImportTrackDialog] Processing ${fileArray.length} files`, fileArray);
try {
// Process files sequentially
for (let i = 0; i < fileArray.length; i++) {
console.log(`[ImportTrackDialog] Loop iteration ${i}, fileArray.length: ${fileArray.length}`);
const file = fileArray[i];
console.log(`[ImportTrackDialog] Processing file ${i + 1}/${fileArray.length}: ${file.name}, type: ${file.type}`);
if (!file.type.startsWith('audio/')) {
console.warn(`Skipping non-audio file: ${file.name} (type: ${file.type})`);
continue;
}
try {
console.log(`[ImportTrackDialog] Decoding file ${i + 1}/${fileArray.length}: ${file.name}`);
const buffer = await decodeAudioFile(file);
const trackName = file.name.replace(/\.[^/.]+$/, ''); // Remove extension
console.log(`[ImportTrackDialog] Importing track: ${trackName}`);
onImportTrack(buffer, trackName);
console.log(`[ImportTrackDialog] Track imported: ${trackName}`);
} catch (error) {
console.error(`Failed to import ${file.name}:`, error);
}
console.log(`[ImportTrackDialog] Finished processing file ${i + 1}`);
}
console.log('[ImportTrackDialog] Loop completed, all files processed');
} catch (error) {
console.error('[ImportTrackDialog] Error in handleFiles:', error);
} finally {
setIsLoading(false);
console.log('[ImportTrackDialog] Closing dialog');
onClose();
}
};
const handleFileSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
const files = e.target.files;
if (files && files.length > 0) {
handleFiles(files);
}
// Reset input
e.target.value = '';
};
const handleDragOver = (e: React.DragEvent) => {
e.preventDefault();
setIsDragging(true);
};
const handleDragLeave = (e: React.DragEvent) => {
e.preventDefault();
setIsDragging(false);
};
const handleDrop = (e: React.DragEvent) => {
e.preventDefault();
setIsDragging(false);
const files = e.dataTransfer.files;
if (files && files.length > 0) {
handleFiles(files);
}
};
return (
<Modal
open={open}
onClose={onClose}
title="Import Audio as Tracks"
description="Import one or more audio files as new tracks"
>
<div className="space-y-4">
{/* Drag and Drop Area */}
<div
onDragOver={handleDragOver}
onDragLeave={handleDragLeave}
onDrop={handleDrop}
className={`
border-2 border-dashed rounded-lg p-8 text-center transition-colors
${isDragging
? 'border-primary bg-primary/10'
: 'border-border hover:border-primary/50'
}
${isLoading ? 'opacity-50 pointer-events-none' : ''}
`}
>
<Upload className="h-12 w-12 mx-auto mb-4 text-muted-foreground" />
<p className="text-sm text-foreground mb-2">
{isLoading ? 'Importing files...' : 'Drag and drop audio files here'}
</p>
<p className="text-xs text-muted-foreground mb-4">
or
</p>
<Button
onClick={() => fileInputRef.current?.click()}
disabled={isLoading}
>
<Plus className="h-4 w-4 mr-2" />
Choose Files
</Button>
<input
ref={fileInputRef}
type="file"
accept="audio/*"
multiple
onChange={handleFileSelect}
className="hidden"
/>
</div>
{/* Supported Formats */}
<div className="text-xs text-muted-foreground">
<p className="font-medium mb-1">Supported formats:</p>
<p>MP3, WAV, OGG, FLAC, M4A, AAC, and more</p>
<p className="mt-2">
💡 Tip: Select multiple files at once or drag multiple files to import them all as separate tracks
</p>
</div>
{/* Actions */}
<div className="flex justify-end gap-2 pt-2 border-t border-border">
<Button variant="outline" onClick={onClose} disabled={isLoading}>
Cancel
</Button>
</div>
</div>
</Modal>
);
}
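`handleFiles` above filters on the browser-reported MIME type and derives the track name by stripping the final extension. Those two pure steps, isolated for illustration (the async decoding itself lives in `decodeAudioFile` and is not shown here):

```typescript
// True when the browser reports an audio MIME type (e.g. "audio/mpeg", "audio/wav").
function isAudioFile(mimeType: string): boolean {
  return mimeType.startsWith('audio/');
}

// Default track name: the filename minus its last extension.
// "drums.take2.wav" becomes "drums.take2"; a name with no extension is unchanged.
function trackNameFromFilename(filename: string): string {
  return filename.replace(/\.[^/.]+$/, '');
}
```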

components/tracks/Track.tsx

@@ -0,0 +1,882 @@
"use client";
import * as React from "react";
import {
Volume2,
VolumeX,
Headphones,
Trash2,
ChevronDown,
ChevronRight,
ChevronUp,
UnfoldHorizontal,
Upload,
Mic,
Gauge,
Circle,
Sparkles,
} from "lucide-react";
import type { Track as TrackType } from "@/types/track";
import {
COLLAPSED_TRACK_HEIGHT,
MIN_TRACK_HEIGHT,
MAX_TRACK_HEIGHT,
DEFAULT_TRACK_HEIGHT,
} from "@/types/track";
import { Button } from "@/components/ui/Button";
import { Slider } from "@/components/ui/Slider";
import { cn } from "@/lib/utils/cn";
import type { EffectType } from "@/lib/audio/effects/chain";
import { TrackControls } from "./TrackControls";
import { AutomationLane } from "@/components/automation/AutomationLane";
import type {
AutomationLane as AutomationLaneType,
AutomationPoint as AutomationPointType,
} from "@/types/automation";
import { createAutomationPoint } from "@/lib/audio/automation/utils";
import { createAutomationLane } from "@/lib/audio/automation-utils";
import { EffectDevice } from "@/components/effects/EffectDevice";
import { EffectBrowser } from "@/components/effects/EffectBrowser";
import { ImportDialog } from "@/components/dialogs/ImportDialog";
import { importAudioFile, type ImportOptions } from "@/lib/audio/decoder";
export interface TrackProps {
track: TrackType;
zoom: number;
currentTime: number;
duration: number;
isSelected?: boolean;
onSelect?: () => void;
onToggleMute: () => void;
onToggleSolo: () => void;
onToggleCollapse: () => void;
onVolumeChange: (volume: number) => void;
onPanChange: (pan: number) => void;
onRemove: () => void;
onNameChange: (name: string) => void;
onUpdateTrack: (trackId: string, updates: Partial<TrackType>) => void;
onSeek?: (time: number) => void;
onLoadAudio?: (buffer: AudioBuffer) => void;
onToggleEffect?: (effectId: string) => void;
onRemoveEffect?: (effectId: string) => void;
onUpdateEffect?: (effectId: string, parameters: any) => void;
onAddEffect?: (effectType: EffectType) => void;
onSelectionChange?: (
selection: { start: number; end: number } | null,
) => void;
onToggleRecordEnable?: () => void;
isRecording?: boolean;
recordingLevel?: number;
playbackLevel?: number;
onParameterTouched?: (
trackId: string,
laneId: string,
touched: boolean,
) => void;
isPlaying?: boolean;
renderControlsOnly?: boolean;
renderWaveformOnly?: boolean;
}
export function Track({
track,
zoom,
currentTime,
duration,
isSelected,
onSelect,
onToggleMute,
onToggleSolo,
onToggleCollapse,
onVolumeChange,
onPanChange,
onRemove,
onNameChange,
onUpdateTrack,
onSeek,
onLoadAudio,
onToggleEffect,
onRemoveEffect,
onUpdateEffect,
onAddEffect,
onSelectionChange,
onToggleRecordEnable,
isRecording = false,
recordingLevel = 0,
playbackLevel = 0,
onParameterTouched,
isPlaying = false,
renderControlsOnly = false,
renderWaveformOnly = false,
}: TrackProps) {
const canvasRef = React.useRef<HTMLCanvasElement>(null);
const containerRef = React.useRef<HTMLDivElement>(null);
const fileInputRef = React.useRef<HTMLInputElement>(null);
const [themeKey, setThemeKey] = React.useState(0);
const [isResizing, setIsResizing] = React.useState(false);
const resizeStartRef = React.useRef({ y: 0, height: 0 });
const [effectBrowserOpen, setEffectBrowserOpen] = React.useState(false);
// Import dialog state
const [showImportDialog, setShowImportDialog] = React.useState(false);
const [pendingFile, setPendingFile] = React.useState<File | null>(null);
const [fileMetadata, setFileMetadata] = React.useState<{
sampleRate?: number;
channels?: number;
}>({});
// Selection state
const [isSelecting, setIsSelecting] = React.useState(false);
const [selectionStart, setSelectionStart] = React.useState<number | null>(
null,
);
const [isSelectingByDrag, setIsSelectingByDrag] = React.useState(false);
const [dragStartPos, setDragStartPos] = React.useState<{
x: number;
y: number;
} | null>(null);
// Touch callbacks for automation recording
const handlePanTouchStart = React.useCallback(() => {
if (isPlaying && onParameterTouched) {
const panLane = track.automation.lanes.find(
(l) => l.parameterId === "pan",
);
if (panLane && (panLane.mode === "touch" || panLane.mode === "latch")) {
queueMicrotask(() => onParameterTouched(track.id, panLane.id, true));
}
}
}, [isPlaying, onParameterTouched, track.id, track.automation.lanes]);
const handlePanTouchEnd = React.useCallback(() => {
if (isPlaying && onParameterTouched) {
const panLane = track.automation.lanes.find(
(l) => l.parameterId === "pan",
);
if (panLane && (panLane.mode === "touch" || panLane.mode === "latch")) {
queueMicrotask(() => onParameterTouched(track.id, panLane.id, false));
}
}
}, [isPlaying, onParameterTouched, track.id, track.automation.lanes]);
const handleVolumeTouchStart = React.useCallback(() => {
if (isPlaying && onParameterTouched) {
const volumeLane = track.automation.lanes.find(
(l) => l.parameterId === "volume",
);
if (
volumeLane &&
(volumeLane.mode === "touch" || volumeLane.mode === "latch")
) {
queueMicrotask(() => onParameterTouched(track.id, volumeLane.id, true));
}
}
}, [isPlaying, onParameterTouched, track.id, track.automation.lanes]);
const handleVolumeTouchEnd = React.useCallback(() => {
if (isPlaying && onParameterTouched) {
const volumeLane = track.automation.lanes.find(
(l) => l.parameterId === "volume",
);
if (
volumeLane &&
(volumeLane.mode === "touch" || volumeLane.mode === "latch")
) {
queueMicrotask(() =>
onParameterTouched(track.id, volumeLane.id, false),
);
}
}
}, [isPlaying, onParameterTouched, track.id, track.automation.lanes]);
// Auto-create automation lane for selected parameter if it doesn't exist
React.useEffect(() => {
if (!track.automation?.showAutomation) return;
const selectedParameterId =
track.automation.selectedParameterId || "volume";
const laneExists = track.automation.lanes.some(
(lane) => lane.parameterId === selectedParameterId,
);
if (!laneExists) {
// Build list of available parameters
const availableParameters: Array<{ id: string; name: string }> = [
{ id: "volume", name: "Volume" },
{ id: "pan", name: "Pan" },
];
track.effectChain.effects.forEach((effect) => {
if (effect.parameters) {
Object.keys(effect.parameters).forEach((paramKey) => {
const parameterId = `effect.${effect.id}.${paramKey}`;
const paramName = `${effect.name} - ${paramKey.charAt(0).toUpperCase() + paramKey.slice(1)}`;
availableParameters.push({ id: parameterId, name: paramName });
});
}
});
const paramInfo = availableParameters.find(
(p) => p.id === selectedParameterId,
);
if (paramInfo) {
// Determine value range based on parameter type
let valueRange = { min: 0, max: 1 };
let unit = "";
let formatter: ((value: number) => string) | undefined;
if (selectedParameterId === "volume") {
unit = "dB";
} else if (selectedParameterId === "pan") {
formatter = (value: number) => {
if (value === 0.5) return "C";
if (value < 0.5)
return `${Math.abs((0.5 - value) * 200).toFixed(0)}L`;
return `${((value - 0.5) * 200).toFixed(0)}R`;
};
} else if (selectedParameterId.startsWith("effect.")) {
// Parse effect parameter: effect.{effectId}.{paramName}
const parts = selectedParameterId.split(".");
if (parts.length === 3) {
const paramName = parts[2];
// Set ranges based on parameter name
if (paramName === "frequency") {
valueRange = { min: 20, max: 20000 };
unit = "Hz";
} else if (paramName === "Q") {
valueRange = { min: 0.1, max: 20 };
} else if (paramName === "gain") {
valueRange = { min: -40, max: 40 };
unit = "dB";
}
}
}
const newLane = createAutomationLane(
track.id,
selectedParameterId,
paramInfo.name,
{
min: valueRange.min,
max: valueRange.max,
unit,
formatter,
},
);
onUpdateTrack(track.id, {
automation: {
...track.automation,
lanes: [...track.automation.lanes, newLane],
selectedParameterId,
},
});
}
}
}, [
track.automation?.showAutomation,
track.automation?.selectedParameterId,
track.automation?.lanes,
track.effectChain.effects,
track.id,
onUpdateTrack,
]);
// Listen for theme changes
React.useEffect(() => {
const observer = new MutationObserver(() => {
// Increment key to force waveform redraw
setThemeKey((prev) => prev + 1);
});
// Watch for class changes on document element (dark mode toggle)
observer.observe(document.documentElement, {
attributes: true,
attributeFilter: ["class"],
});
return () => observer.disconnect();
}, []);
// Draw waveform
React.useEffect(() => {
if (!track.audioBuffer || !canvasRef.current) return;
const canvas = canvasRef.current;
const ctx = canvas.getContext("2d");
if (!ctx) return;
// Use parent container's size since canvas is absolute positioned
const parent = canvas.parentElement;
if (!parent) return;
const dpr = window.devicePixelRatio || 1;
const rect = parent.getBoundingClientRect();
canvas.width = rect.width * dpr;
canvas.height = rect.height * dpr;
ctx.scale(dpr, dpr);
const width = rect.width;
const height = rect.height;
// Clear canvas with theme color
const bgColor =
getComputedStyle(canvas).getPropertyValue("--color-waveform-bg") ||
"rgb(15, 23, 42)";
ctx.fillStyle = bgColor;
ctx.fillRect(0, 0, width, height);
const buffer = track.audioBuffer;
const channelData = buffer.getChannelData(0);
// Calculate samples per pixel based on the total width
// Must match the timeline calculation exactly
const PIXELS_PER_SECOND_BASE = 5;
let totalWidth;
if (zoom >= 1) {
const calculatedWidth = duration * zoom * PIXELS_PER_SECOND_BASE;
totalWidth = Math.max(calculatedWidth, width);
} else {
totalWidth = width;
}
// Calculate how much of the canvas width this track's duration occupies
// If duration is 0 or invalid, use full width (first track scenario)
const trackDurationRatio = duration > 0 ? buffer.duration / duration : 1;
const trackWidth = Math.min(width * trackDurationRatio, width);
const samplesPerPixel = trackWidth > 0 ? buffer.length / trackWidth : 0;
// Draw waveform
ctx.fillStyle = track.color;
ctx.strokeStyle = track.color;
ctx.lineWidth = 1;
for (let x = 0; x < Math.floor(trackWidth); x++) {
const startSample = Math.floor(x * samplesPerPixel);
const endSample = Math.floor((x + 1) * samplesPerPixel);
let min = 1.0;
let max = -1.0;
for (let i = startSample; i < endSample && i < channelData.length; i++) {
const sample = channelData[i];
if (sample < min) min = sample;
if (sample > max) max = sample;
}
const y1 = (height / 2) * (1 - max);
const y2 = (height / 2) * (1 - min);
ctx.beginPath();
ctx.moveTo(x, y1);
ctx.lineTo(x, y2);
ctx.stroke();
}
// Draw center line
ctx.strokeStyle = "rgba(148, 163, 184, 0.2)";
ctx.lineWidth = 1;
ctx.beginPath();
ctx.moveTo(0, height / 2);
ctx.lineTo(width, height / 2);
ctx.stroke();
// Draw selection overlay
if (track.selection && duration > 0) {
const selStartX = (track.selection.start / duration) * width;
const selEndX = (track.selection.end / duration) * width;
// Draw selection background
ctx.fillStyle = "rgba(59, 130, 246, 0.2)";
ctx.fillRect(selStartX, 0, selEndX - selStartX, height);
// Draw selection borders
ctx.strokeStyle = "rgba(59, 130, 246, 0.8)";
ctx.lineWidth = 2;
// Start border
ctx.beginPath();
ctx.moveTo(selStartX, 0);
ctx.lineTo(selStartX, height);
ctx.stroke();
// End border
ctx.beginPath();
ctx.moveTo(selEndX, 0);
ctx.lineTo(selEndX, height);
ctx.stroke();
}
// Draw playhead
if (duration > 0) {
const playheadX = (currentTime / duration) * width;
ctx.strokeStyle = "rgba(239, 68, 68, 0.8)";
ctx.lineWidth = 2;
ctx.beginPath();
ctx.moveTo(playheadX, 0);
ctx.lineTo(playheadX, height);
ctx.stroke();
}
}, [
track.audioBuffer,
track.color,
track.collapsed,
track.height,
zoom,
currentTime,
duration,
themeKey,
track.selection,
]);
const handleCanvasMouseDown = (e: React.MouseEvent<HTMLCanvasElement>) => {
if (!duration) return;
const rect = e.currentTarget.getBoundingClientRect();
const x = e.clientX - rect.left;
const y = e.clientY - rect.top;
const clickTime = (x / rect.width) * duration;
// Store drag start position
setDragStartPos({ x: e.clientX, y: e.clientY });
setIsSelectingByDrag(false);
// Start selection immediately (will be used if user drags)
setIsSelecting(true);
setSelectionStart(clickTime);
};
const handleCanvasMouseMove = (e: React.MouseEvent<HTMLCanvasElement>) => {
if (!isSelecting || selectionStart === null || !duration || !dragStartPos)
return;
const rect = e.currentTarget.getBoundingClientRect();
const x = e.clientX - rect.left;
// Named pointerTime to avoid shadowing the currentTime prop
const pointerTime = (x / rect.width) * duration;
// Check if user has moved enough to be considered dragging (threshold: 3 pixels)
const dragDistance = Math.sqrt(
Math.pow(e.clientX - dragStartPos.x, 2) +
Math.pow(e.clientY - dragStartPos.y, 2),
);
if (dragDistance > 3) {
setIsSelectingByDrag(true);
}
// If dragging, update selection
if (isSelectingByDrag || dragDistance > 3) {
// Clamp to valid time range
const clampedTime = Math.max(0, Math.min(duration, pointerTime));
// Update selection (ensure start < end)
const start = Math.min(selectionStart, clampedTime);
const end = Math.max(selectionStart, clampedTime);
onSelectionChange?.({ start, end });
}
};
const handleCanvasMouseUp = (e: React.MouseEvent<HTMLCanvasElement>) => {
if (!duration) return;
const rect = e.currentTarget.getBoundingClientRect();
const x = e.clientX - rect.left;
const clickTime = (x / rect.width) * duration;
// Check if user actually dragged (check distance directly, not state)
const didDrag = dragStartPos
? Math.sqrt(
Math.pow(e.clientX - dragStartPos.x, 2) +
Math.pow(e.clientY - dragStartPos.y, 2),
) > 3
: false;
// If user didn't drag (just clicked), clear selection and seek
if (!didDrag) {
onSelectionChange?.(null);
if (onSeek) {
onSeek(clickTime);
}
}
// Reset drag state
setIsSelecting(false);
setIsSelectingByDrag(false);
setDragStartPos(null);
};
// Handle mouse leaving canvas during selection
React.useEffect(() => {
const handleGlobalMouseUp = () => {
if (isSelecting) {
setIsSelecting(false);
setIsSelectingByDrag(false);
setDragStartPos(null);
}
};
window.addEventListener("mouseup", handleGlobalMouseUp);
return () => window.removeEventListener("mouseup", handleGlobalMouseUp);
}, [isSelecting]);
const handleFileChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (!file || !onLoadAudio) return;
try {
// Decode to get basic metadata before showing dialog
const arrayBuffer = await file.arrayBuffer();
const audioContext = new AudioContext();
const tempBuffer = await audioContext.decodeAudioData(arrayBuffer);
// Close the temporary context so repeated imports don't leak audio resources
await audioContext.close();
// Set metadata and show import dialog
setFileMetadata({
sampleRate: tempBuffer.sampleRate,
channels: tempBuffer.numberOfChannels,
});
setPendingFile(file);
setShowImportDialog(true);
} catch (error) {
console.error("Failed to read audio file metadata:", error);
}
// Reset input
e.target.value = "";
};
const handleImport = async (options: ImportOptions) => {
if (!pendingFile || !onLoadAudio) return;
try {
setShowImportDialog(false);
const { buffer, metadata } = await importAudioFile(pendingFile, options);
onLoadAudio(buffer);
// Update track name to filename if it's still default
if (track.name === "New Track" || track.name === "Untitled Track") {
const fileName = metadata.fileName.replace(/\.[^/.]+$/, "");
onNameChange(fileName);
}
console.log("Audio imported:", metadata);
} catch (error) {
console.error("Failed to import audio file:", error);
} finally {
setPendingFile(null);
setFileMetadata({});
}
};
const handleImportCancel = () => {
setShowImportDialog(false);
setPendingFile(null);
setFileMetadata({});
};
const handleLoadAudioClick = () => {
fileInputRef.current?.click();
};
const [isDragging, setIsDragging] = React.useState(false);
const handleDragOver = (e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(true);
};
const handleDragLeave = (e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(false);
};
const handleDrop = async (e: React.DragEvent) => {
e.preventDefault();
e.stopPropagation();
setIsDragging(false);
const file = e.dataTransfer.files?.[0];
if (!file || !onLoadAudio) return;
// Check if it's an audio file
if (!file.type.startsWith("audio/")) {
console.warn("Dropped file is not an audio file");
return;
}
try {
const arrayBuffer = await file.arrayBuffer();
const audioContext = new AudioContext();
const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
// Close the decoding context so repeated drops don't leak audio resources
await audioContext.close();
onLoadAudio(audioBuffer);
// Update track name to filename if it's still default
if (track.name === "New Track" || track.name === "Untitled Track") {
const fileName = file.name.replace(/\.[^/.]+$/, "");
onNameChange(fileName);
}
} catch (error) {
console.error("Failed to load audio file:", error);
}
};
const trackHeight = track.collapsed
? COLLAPSED_TRACK_HEIGHT
: Math.max(track.height || DEFAULT_TRACK_HEIGHT, MIN_TRACK_HEIGHT);
// Track height resize handlers
const handleResizeStart = React.useCallback(
(e: React.MouseEvent) => {
if (track.collapsed) return;
e.preventDefault();
e.stopPropagation();
setIsResizing(true);
resizeStartRef.current = { y: e.clientY, height: track.height ?? DEFAULT_TRACK_HEIGHT };
},
[track.collapsed, track.height],
);
React.useEffect(() => {
if (!isResizing) return;
const handleMouseMove = (e: MouseEvent) => {
const delta = e.clientY - resizeStartRef.current.y;
const newHeight = Math.max(
MIN_TRACK_HEIGHT,
Math.min(MAX_TRACK_HEIGHT, resizeStartRef.current.height + delta),
);
onUpdateTrack(track.id, { height: newHeight });
};
const handleMouseUp = () => {
setIsResizing(false);
};
window.addEventListener("mousemove", handleMouseMove);
window.addEventListener("mouseup", handleMouseUp);
return () => {
window.removeEventListener("mousemove", handleMouseMove);
window.removeEventListener("mouseup", handleMouseUp);
};
}, [isResizing, onUpdateTrack, track.id]);
// Render only controls
if (renderControlsOnly) {
return (
<>
<div
className={cn(
"w-full flex-shrink-0 border-b border-r-4 p-4 flex flex-col gap-4 min-h-0 transition-all duration-200 cursor-pointer border-border",
isSelected
? "bg-primary/10 border-r-primary"
: "bg-card border-r-transparent hover:bg-accent/30",
)}
style={{
height: `${trackHeight}px`,
}}
onClick={(e) => {
e.stopPropagation();
if (onSelect) onSelect();
}}
>
{/* Collapsed Header */}
{track.collapsed && (
<div
className={cn(
"group flex items-center gap-1.5 px-2 py-1 h-full w-full cursor-pointer transition-colors",
isSelected ? "bg-primary/10" : "hover:bg-accent/50",
)}
onClick={(e) => {
e.stopPropagation();
onToggleCollapse();
}}
title="Expand track"
>
<ChevronRight className="h-3 w-3 text-muted-foreground flex-shrink-0" />
<div
className="h-4 w-0.5 rounded-full flex-shrink-0"
style={{ backgroundColor: track.color }}
/>
<span className="text-xs font-semibold text-foreground truncate flex-1">
{String(track.name || "Untitled Track")}
</span>
</div>
)}
{/* Track Controls - Only show when not collapsed */}
{!track.collapsed && (
<div className="flex-1 flex flex-col items-center justify-center min-h-0 overflow-hidden">
{/* Integrated Track Controls (Pan + Fader + Buttons) */}
<TrackControls
trackName={track.name}
trackColor={track.color}
collapsed={track.collapsed}
volume={track.volume}
pan={track.pan}
peakLevel={
track.recordEnabled || isRecording
? recordingLevel
: playbackLevel
}
rmsLevel={
track.recordEnabled || isRecording
? recordingLevel * 0.7
: playbackLevel * 0.7
}
isMuted={track.mute}
isSolo={track.solo}
isRecordEnabled={track.recordEnabled}
showAutomation={track.automation?.showAutomation}
showEffects={track.showEffects}
isRecording={isRecording}
onNameChange={onNameChange}
onToggleCollapse={onToggleCollapse}
onVolumeChange={onVolumeChange}
onPanChange={onPanChange}
onMuteToggle={onToggleMute}
onSoloToggle={onToggleSolo}
onRecordToggle={onToggleRecordEnable}
onAutomationToggle={() => {
onUpdateTrack(track.id, {
automation: {
...track.automation,
showAutomation: !track.automation?.showAutomation,
},
});
}}
onEffectsClick={() => {
onUpdateTrack(track.id, {
showEffects: !track.showEffects,
});
}}
onVolumeTouchStart={handleVolumeTouchStart}
onVolumeTouchEnd={handleVolumeTouchEnd}
onPanTouchStart={handlePanTouchStart}
onPanTouchEnd={handlePanTouchEnd}
/>
</div>
)}
</div>
{/* Import Dialog - Only render in controls mode to avoid duplicates */}
<ImportDialog
open={showImportDialog}
onClose={handleImportCancel}
onImport={handleImport}
fileName={pendingFile?.name}
sampleRate={fileMetadata.sampleRate}
channels={fileMetadata.channels}
/>
</>
);
}
// Render only waveform
if (renderWaveformOnly) {
return (
<div
className={cn(
"relative bg-waveform-bg border-b transition-all duration-200 h-full",
isSelected && "bg-primary/5",
)}
>
{/* Inner container with dynamic width */}
<div
className="relative h-full"
style={{
minWidth:
track.audioBuffer && zoom >= 1
? `${duration * zoom * 5}px`
: "100%",
}}
>
{/* Delete Button - Top Right Overlay - Stays fixed when scrolling */}
<button
onClick={(e) => {
e.stopPropagation();
onRemove();
}}
className={cn(
"sticky top-2 right-2 float-right z-20 h-6 w-6 rounded flex items-center justify-center transition-all",
"bg-card/80 hover:bg-destructive/90 text-muted-foreground hover:text-white",
"border border-border/50 hover:border-destructive",
"backdrop-blur-sm shadow-sm hover:shadow-md",
)}
title="Remove track"
>
<Trash2 className="h-3 w-3" />
</button>
{track.audioBuffer ? (
<>
{/* Waveform Canvas */}
<canvas
ref={canvasRef}
className="absolute inset-0 w-full h-full cursor-pointer"
onMouseDown={handleCanvasMouseDown}
onMouseMove={handleCanvasMouseMove}
onMouseUp={handleCanvasMouseUp}
/>
</>
) : (
!track.collapsed && (
<>
{/* Empty state - clickable area for upload with drag & drop */}
<div
className={cn(
"absolute inset-0 w-full h-full transition-colors cursor-pointer",
isDragging
? "bg-primary/20 border-2 border-primary border-dashed"
: "hover:bg-accent/50",
)}
onClick={(e) => {
e.stopPropagation();
handleLoadAudioClick();
}}
onDragOver={handleDragOver}
onDragLeave={handleDragLeave}
onDrop={handleDrop}
/>
<input
ref={fileInputRef}
type="file"
accept="audio/*"
onChange={handleFileChange}
className="hidden"
/>
</>
)
)}
</div>
{/* Import Dialog - Also needed in waveform-only mode */}
<ImportDialog
open={showImportDialog}
onClose={handleImportCancel}
onImport={handleImport}
fileName={pendingFile?.name}
sampleRate={fileMetadata.sampleRate}
channels={fileMetadata.channels}
/>
</div>
);
}
// Render full track (both controls and waveform side by side)
// This mode is no longer used - tracks are rendered separately with renderControlsOnly and renderWaveformOnly
return (
<div
ref={containerRef}
className={cn(
"flex flex-col transition-all duration-200 relative",
isSelected && "bg-primary/5",
)}
>
{/* Full track content removed - now rendered separately in TrackList */}
<div>Track component should not be rendered in full mode anymore</div>
</div>
);
}
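The canvas mouse handlers above distinguish "click to seek" from "drag to select" by measuring pointer travel against a 3-pixel threshold in both mousemove and mouseup. A minimal standalone sketch of that decision (helper and type names are illustrative, not part of the component):

```typescript
// Hypothetical extraction of the component's click-vs-drag test.
type Point = { x: number; y: number };

// Euclidean distance between the mousedown and the current pointer position.
function dragDistance(start: Point, end: Point): number {
  return Math.sqrt((end.x - start.x) ** 2 + (end.y - start.y) ** 2);
}

// Movement beyond 3px counts as a drag (update selection);
// anything less is a click (clear selection and seek).
const DRAG_THRESHOLD_PX = 3;

function isDrag(start: Point, end: Point): boolean {
  return dragDistance(start, end) > DRAG_THRESHOLD_PX;
}
```

Computing the distance again on mouseup, rather than trusting the `isSelectingByDrag` state, sidesteps the race where the state update from the last mousemove has not flushed yet.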

@@ -0,0 +1,457 @@
'use client';
import * as React from 'react';
import { Circle, Headphones, MoreHorizontal, ChevronRight, ChevronDown, ChevronUp } from 'lucide-react';
import { CircularKnob } from '@/components/ui/CircularKnob';
import { TrackFader } from './TrackFader';
import { cn } from '@/lib/utils/cn';
export interface TrackControlsProps {
trackName: string;
trackColor: string;
collapsed: boolean;
volume: number;
pan: number;
peakLevel: number;
rmsLevel: number;
isMuted?: boolean;
isSolo?: boolean;
isRecordEnabled?: boolean;
showAutomation?: boolean;
showEffects?: boolean;
isRecording?: boolean;
mobileCollapsed?: boolean; // For mobile view collapsible controls
onNameChange: (name: string) => void;
onToggleCollapse: () => void;
onVolumeChange: (volume: number) => void;
onPanChange: (pan: number) => void;
onMuteToggle: () => void;
onSoloToggle?: () => void;
onRecordToggle?: () => void;
onAutomationToggle?: () => void;
onEffectsClick?: () => void;
onVolumeTouchStart?: () => void;
onVolumeTouchEnd?: () => void;
onPanTouchStart?: () => void;
onPanTouchEnd?: () => void;
onToggleMobileCollapse?: () => void;
className?: string;
}
export function TrackControls({
trackName,
trackColor,
collapsed,
volume,
pan,
peakLevel,
rmsLevel,
isMuted = false,
isSolo = false,
isRecordEnabled = false,
showAutomation = false,
showEffects = false,
isRecording = false,
mobileCollapsed = false,
onNameChange,
onToggleCollapse,
onVolumeChange,
onPanChange,
onMuteToggle,
onSoloToggle,
onRecordToggle,
onAutomationToggle,
onEffectsClick,
onVolumeTouchStart,
onVolumeTouchEnd,
onPanTouchStart,
onPanTouchEnd,
onToggleMobileCollapse,
className,
}: TrackControlsProps) {
const [isEditingName, setIsEditingName] = React.useState(false);
const [editName, setEditName] = React.useState(trackName);
const handleNameClick = () => {
setIsEditingName(true);
setEditName(trackName);
};
const handleNameBlur = () => {
setIsEditingName(false);
if (editName.trim() && editName !== trackName) {
onNameChange(editName.trim());
} else {
setEditName(trackName);
}
};
const handleNameKeyDown = (e: React.KeyboardEvent) => {
if (e.key === 'Enter') {
handleNameBlur();
} else if (e.key === 'Escape') {
setIsEditingName(false);
setEditName(trackName);
}
};
// Mobile collapsed view - minimal controls (like master controls)
if (mobileCollapsed) {
return (
<div className={cn(
'flex flex-col items-center gap-2 px-3 py-2 bg-card/50 border border-accent/50 rounded-lg w-full sm:hidden',
className
)}>
<div className="flex items-center justify-between w-full">
<div className="flex items-center gap-1 flex-1">
<button
onClick={onToggleCollapse}
className="p-0.5 hover:bg-accent/20 rounded transition-colors flex-shrink-0"
title={collapsed ? 'Expand track' : 'Collapse track'}
>
{collapsed ? (
<ChevronRight className="h-3 w-3 text-muted-foreground" />
) : (
<ChevronDown className="h-3 w-3 text-muted-foreground" />
)}
</button>
<div
className="text-xs font-bold uppercase tracking-wider"
style={{ color: trackColor }}
>
{trackName}
</div>
</div>
{onToggleMobileCollapse && (
<button
onClick={onToggleMobileCollapse}
className="p-1 hover:bg-accent/20 rounded transition-colors"
title="Expand track controls"
>
<ChevronDown className="h-3 w-3 text-muted-foreground" />
</button>
)}
</div>
<div className="flex items-center gap-1 w-full justify-center">
{onRecordToggle && (
<button
onClick={onRecordToggle}
className={cn(
'h-7 w-7 rounded-full flex items-center justify-center transition-all',
isRecordEnabled
? isRecording
? 'bg-red-500 shadow-lg shadow-red-500/50 animate-pulse'
: 'bg-red-500 shadow-md shadow-red-500/30'
: 'bg-card hover:bg-accent border border-border/50'
)}
title={isRecordEnabled ? 'Record Armed' : 'Arm for Recording'}
>
<Circle className={cn('h-3.5 w-3.5', isRecordEnabled ? 'fill-white text-white' : 'text-muted-foreground')} />
</button>
)}
<button
onClick={onMuteToggle}
className={cn(
'h-7 w-7 rounded-md flex items-center justify-center transition-all text-xs font-bold',
isMuted
? 'bg-blue-500 text-white shadow-md shadow-blue-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title={isMuted ? 'Unmute' : 'Mute'}
>
M
</button>
{onSoloToggle && (
<button
onClick={onSoloToggle}
className={cn(
'h-7 w-7 rounded-md flex items-center justify-center transition-all text-xs font-bold',
isSolo
? 'bg-yellow-500 text-white shadow-md shadow-yellow-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title={isSolo ? 'Unsolo' : 'Solo'}
>
S
</button>
)}
<div className="flex-1 h-2 bg-muted rounded-full overflow-hidden">
<div
className={cn(
'h-full transition-all',
peakLevel > 0.95 ? 'bg-red-500' : peakLevel > 0.8 ? 'bg-yellow-500' : 'bg-green-500'
)}
style={{ width: `${peakLevel * 100}%` }}
/>
</div>
</div>
</div>
);
}
// Mobile expanded view - full controls (like master controls)
const mobileExpandedView = (
<div className={cn(
'flex flex-col items-center gap-3 px-3 py-3 bg-card/50 border border-accent/50 rounded-lg w-full sm:hidden',
className
)}>
{/* Header with collapse button */}
<div className="flex items-center justify-between w-full">
<button
onClick={onToggleCollapse}
className="p-0.5 hover:bg-accent/20 rounded transition-colors flex-shrink-0"
title={collapsed ? 'Expand track' : 'Collapse track'}
>
{collapsed ? (
<ChevronRight className="h-3 w-3 text-muted-foreground" />
) : (
<ChevronDown className="h-3 w-3 text-muted-foreground" />
)}
</button>
<div
className="text-xs font-bold uppercase tracking-wider flex-1 text-center"
style={{ color: trackColor }}
>
{trackName}
</div>
{onToggleMobileCollapse && (
<button
onClick={onToggleMobileCollapse}
className="p-0.5 hover:bg-accent/20 rounded transition-colors flex-shrink-0"
title="Collapse track controls"
>
<ChevronUp className="h-3 w-3 text-muted-foreground" />
</button>
)}
</div>
{/* Pan Control */}
<CircularKnob
value={pan}
onChange={onPanChange}
onTouchStart={onPanTouchStart}
onTouchEnd={onPanTouchEnd}
min={-1}
max={1}
step={0.01}
label="PAN"
size={48}
formatValue={(value: number) => {
if (Math.abs(value) < 0.01) return 'C';
if (value < 0) return `${Math.abs(value * 100).toFixed(0)}L`;
return `${(value * 100).toFixed(0)}R`;
}}
/>
{/* Volume Fader - Full height, not compressed */}
<div className="flex-1 flex justify-center items-center w-full min-h-[160px]">
<TrackFader
value={volume}
peakLevel={peakLevel}
rmsLevel={rmsLevel}
onChange={onVolumeChange}
onTouchStart={onVolumeTouchStart}
onTouchEnd={onVolumeTouchEnd}
/>
</div>
{/* Control buttons */}
<div className="flex items-center gap-1 w-full justify-center">
{onRecordToggle && (
<button
onClick={onRecordToggle}
className={cn(
'h-8 w-8 rounded-full flex items-center justify-center transition-all',
isRecordEnabled
? isRecording
? 'bg-red-500 shadow-lg shadow-red-500/50 animate-pulse'
: 'bg-red-500 shadow-md shadow-red-500/30'
: 'bg-card hover:bg-accent border border-border/50'
)}
title={isRecordEnabled ? 'Record Armed' : 'Arm for Recording'}
>
<Circle className={cn('h-3.5 w-3.5', isRecordEnabled ? 'fill-white text-white' : 'text-muted-foreground')} />
</button>
)}
<button
onClick={onMuteToggle}
className={cn(
'h-8 w-8 rounded-md flex items-center justify-center transition-all text-xs font-bold',
isMuted
? 'bg-blue-500 text-white shadow-md shadow-blue-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title={isMuted ? 'Unmute' : 'Mute'}
>
M
</button>
{onSoloToggle && (
<button
onClick={onSoloToggle}
className={cn(
'h-8 w-8 rounded-md flex items-center justify-center transition-all text-xs font-bold',
isSolo
? 'bg-yellow-500 text-white shadow-md shadow-yellow-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title={isSolo ? 'Unsolo' : 'Solo'}
>
S
</button>
)}
{onEffectsClick && (
<button
onClick={onEffectsClick}
className={cn(
'h-8 w-8 rounded-md flex items-center justify-center transition-all text-xs font-bold',
showEffects
? 'bg-purple-500 text-white shadow-md shadow-purple-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title="Effects"
>
FX
</button>
)}
</div>
</div>
);
return (
<>
{/* Mobile view - Show expanded or collapsed */}
{!mobileCollapsed && mobileExpandedView}
{/* Desktop/tablet view - hidden on mobile */}
<div className={cn(
'hidden sm:flex flex-col items-center gap-3 px-4 py-3 bg-card/50 border border-accent/50 rounded-lg',
className
)}>
{/* Track Name Header with Collapse Chevron */}
<div className="flex items-center gap-1 w-full">
<button
onClick={onToggleCollapse}
className="p-0.5 hover:bg-accent/20 rounded transition-colors flex-shrink-0"
title={collapsed ? 'Expand track' : 'Collapse track'}
>
{collapsed ? (
<ChevronRight className="h-3 w-3 text-muted-foreground" />
) : (
<ChevronDown className="h-3 w-3 text-muted-foreground" />
)}
</button>
<div className="flex-1 flex items-center justify-center min-w-0">
{isEditingName ? (
<input
type="text"
value={editName}
onChange={(e) => setEditName(e.target.value)}
onBlur={handleNameBlur}
onKeyDown={handleNameKeyDown}
autoFocus
className="w-24 text-[10px] font-bold uppercase tracking-wider text-center bg-transparent border-b focus:outline-none px-1"
style={{ color: trackColor, borderColor: trackColor }}
/>
) : (
<div
onClick={handleNameClick}
className="w-24 text-[10px] font-bold uppercase tracking-wider text-center cursor-text hover:bg-accent/10 px-1 rounded transition-colors truncate"
style={{ color: trackColor }}
title="Click to edit track name"
>
{trackName}
</div>
)}
</div>
{/* Spacer to balance the chevron and center the label */}
<div className="p-0.5 flex-shrink-0 w-4" />
</div>
{/* Pan Control - Top */}
<div className="flex justify-center w-full">
<CircularKnob
value={pan}
onChange={onPanChange}
onTouchStart={onPanTouchStart}
onTouchEnd={onPanTouchEnd}
min={-1}
max={1}
step={0.01}
label="PAN"
size={48}
formatValue={(value: number) => {
if (Math.abs(value) < 0.01) return 'C';
if (value < 0) return `${Math.abs(value * 100).toFixed(0)}L`;
return `${(value * 100).toFixed(0)}R`;
}}
/>
</div>
{/* Track Fader - Center (vertically centered in remaining space) */}
<div className="flex justify-center items-center flex-1 w-full">
<TrackFader
value={volume}
peakLevel={peakLevel}
rmsLevel={rmsLevel}
onChange={onVolumeChange}
onTouchStart={onVolumeTouchStart}
onTouchEnd={onVolumeTouchEnd}
/>
</div>
{/* Control Buttons - Bottom */}
<div className="flex flex-col gap-1 w-full">
{/* Control Buttons Row 1: R/M/S */}
<div className="flex items-center gap-1 w-full justify-center">
{/* Record Arm */}
{onRecordToggle && (
<button
onClick={onRecordToggle}
className={cn(
'h-8 w-8 rounded-md flex items-center justify-center transition-all text-[11px] font-bold',
isRecordEnabled
? 'bg-red-500 text-white shadow-md shadow-red-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50',
isRecording && 'animate-pulse'
)}
title="Arm track for recording"
>
<Circle className="h-3 w-3 fill-current" />
</button>
)}
{/* Mute Button */}
<button
onClick={onMuteToggle}
className={cn(
'h-8 w-8 rounded-md flex items-center justify-center transition-all text-[11px] font-bold',
isMuted
? 'bg-blue-500 text-white shadow-md shadow-blue-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title="Mute track"
>
M
</button>
{/* Solo Button */}
{onSoloToggle && (
<button
onClick={onSoloToggle}
className={cn(
'h-8 w-8 rounded-md flex items-center justify-center transition-all text-[11px] font-bold',
isSolo
? 'bg-yellow-500 text-black shadow-md shadow-yellow-500/30'
: 'bg-card hover:bg-accent text-muted-foreground border border-border/50'
)}
title="Solo track"
>
<Headphones className="h-3 w-3" />
</button>
)}
</div>
</div>
</div>
</>
);
}
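Both `CircularKnob` instances above pass the same inline `formatValue` callback. Pulled out as a standalone helper (hypothetical name, not exported by the component), it maps the -1..1 pan range to the familiar L/C/R display:

```typescript
// Hypothetical extraction of the pan formatter used by both knob instances.
function formatPan(value: number): string {
  if (Math.abs(value) < 0.01) return 'C'; // treat near-zero as dead center
  if (value < 0) return `${Math.abs(value * 100).toFixed(0)}L`; // pan left
  return `${(value * 100).toFixed(0)}R`; // pan right
}
```

Hoisting it to module scope would also stop a new closure being created on every render.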

@@ -0,0 +1,222 @@
'use client';
import * as React from 'react';
import { Plus, ChevronDown, ChevronRight, Sparkles } from 'lucide-react';
import type { Track as TrackType } from '@/types/track';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils/cn';
import { EffectDevice } from '@/components/effects/EffectDevice';
import { EffectBrowser } from '@/components/effects/EffectBrowser';
import type { EffectType } from '@/lib/audio/effects/chain';
export interface TrackExtensionsProps {
track: TrackType;
onUpdateTrack: (trackId: string, updates: Partial<TrackType>) => void;
onToggleEffect?: (effectId: string) => void;
onRemoveEffect?: (effectId: string) => void;
onUpdateEffect?: (effectId: string, parameters: any) => void;
onAddEffect?: (effectType: EffectType) => void;
asOverlay?: boolean; // When true, renders as full overlay without header
}
export function TrackExtensions({
track,
onUpdateTrack,
onToggleEffect,
onRemoveEffect,
onUpdateEffect,
onAddEffect,
asOverlay = false,
}: TrackExtensionsProps) {
const [effectBrowserOpen, setEffectBrowserOpen] = React.useState(false);
// Don't render if track is collapsed (unless it's an overlay, which handles its own visibility)
if (!asOverlay && track.collapsed) {
return null;
}
// Overlay mode: render full-screen effect rack
if (asOverlay) {
return (
<>
<div className="flex flex-col h-full bg-card/95 rounded-lg border border-border shadow-2xl">
{/* Header with close button */}
<div className="flex items-center justify-between px-4 py-3 border-b border-border bg-muted/50">
<div className="flex items-center gap-2">
<span className="text-sm font-medium">Effects</span>
<span className="text-xs text-muted-foreground">
({track.effectChain.effects.length})
</span>
</div>
<div className="flex items-center gap-2">
<Button
variant="ghost"
size="icon-sm"
onClick={() => setEffectBrowserOpen(true)}
title="Add effect"
>
<Plus className="h-4 w-4" />
</Button>
<Button
variant="ghost"
size="icon-sm"
onClick={() => onUpdateTrack(track.id, { showEffects: false })}
title="Close effects"
>
<ChevronDown className="h-4 w-4" />
</Button>
</div>
</div>
{/* Effects rack */}
<div className="flex-1 overflow-x-auto custom-scrollbar p-4">
<div className="flex h-full gap-4">
{track.effectChain.effects.length === 0 ? (
<div className="flex flex-col items-center justify-center w-full text-center gap-3">
<Sparkles className="h-12 w-12 text-muted-foreground/30" />
<div>
<p className="text-sm text-muted-foreground mb-1">No effects yet</p>
<p className="text-xs text-muted-foreground/70">
Click + to add an effect
</p>
</div>
</div>
) : (
track.effectChain.effects.map((effect) => (
<EffectDevice
key={effect.id}
effect={effect}
onToggleEnabled={() => onToggleEffect?.(effect.id)}
onRemove={() => onRemoveEffect?.(effect.id)}
onUpdateParameters={(params) => onUpdateEffect?.(effect.id, params)}
onToggleExpanded={() => {
const updatedEffects = track.effectChain.effects.map((e) =>
e.id === effect.id ? { ...e, expanded: !e.expanded } : e
);
onUpdateTrack(track.id, {
effectChain: { ...track.effectChain, effects: updatedEffects },
});
}}
/>
))
)}
</div>
</div>
</div>
{/* Effect Browser Dialog */}
<EffectBrowser
open={effectBrowserOpen}
onClose={() => setEffectBrowserOpen(false)}
onSelectEffect={(effectType) => {
if (onAddEffect) {
onAddEffect(effectType);
}
}}
/>
</>
);
}
// Original inline mode
return (
<>
{/* Effects Section (Collapsible, Full Width) */}
<div className="bg-muted/50 border-b border-border/50">
{/* Effects Header - clickable to toggle */}
<div
className="flex items-center gap-2 px-3 py-1.5 cursor-pointer hover:bg-accent/30 transition-colors"
onClick={() => {
onUpdateTrack(track.id, {
showEffects: !track.showEffects,
});
}}
>
{track.showEffects ? (
<ChevronDown className="h-3.5 w-3.5 text-muted-foreground flex-shrink-0" />
) : (
<ChevronRight className="h-3.5 w-3.5 text-muted-foreground flex-shrink-0" />
)}
{/* Show mini effect chain when collapsed */}
{!track.showEffects && track.effectChain.effects.length > 0 ? (
<div className="flex-1 flex items-center gap-1 overflow-x-auto custom-scrollbar">
{track.effectChain.effects.map((effect) => (
<div
key={effect.id}
className={cn(
'px-2 py-0.5 rounded text-[10px] font-medium flex-shrink-0',
effect.enabled
? 'bg-primary/20 text-primary border border-primary/30'
: 'bg-muted/30 text-muted-foreground border border-border'
)}
>
{effect.name}
</div>
))}
</div>
) : (
<span className="text-xs font-medium text-muted-foreground">
Devices ({track.effectChain.effects.length})
</span>
)}
<Button
variant="ghost"
size="icon-sm"
onClick={(e) => {
e.stopPropagation();
setEffectBrowserOpen(true);
}}
title="Add effect"
className="h-5 w-5 flex-shrink-0"
>
<Plus className="h-3 w-3" />
</Button>
</div>
{/* Horizontal scrolling device rack - expanded state */}
{track.showEffects && (
<div className="h-48 overflow-x-auto custom-scrollbar bg-muted/70 p-3">
<div className="flex h-full gap-3">
{track.effectChain.effects.length === 0 ? (
<div className="text-xs text-muted-foreground text-center py-8 w-full">
No devices. Click + to add an effect.
</div>
) : (
track.effectChain.effects.map((effect) => (
<EffectDevice
key={effect.id}
effect={effect}
onToggleEnabled={() => onToggleEffect?.(effect.id)}
onRemove={() => onRemoveEffect?.(effect.id)}
onUpdateParameters={(params) => onUpdateEffect?.(effect.id, params)}
onToggleExpanded={() => {
const updatedEffects = track.effectChain.effects.map((e) =>
e.id === effect.id ? { ...e, expanded: !e.expanded } : e
);
onUpdateTrack(track.id, {
effectChain: { ...track.effectChain, effects: updatedEffects },
});
}}
/>
))
)}
</div>
</div>
)}
</div>
{/* Effect Browser Dialog */}
<EffectBrowser
open={effectBrowserOpen}
onClose={() => setEffectBrowserOpen(false)}
onSelectEffect={(effectType) => {
if (onAddEffect) {
onAddEffect(effectType);
}
}}
/>
</>
);
}
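The overlay and inline racks above both toggle an effect's `expanded` flag with the same immutable map-and-spread pattern before handing the new chain to `onUpdateTrack`. As a standalone sketch (types simplified from the real effect-chain shape):

```typescript
// Simplified stand-in for the objects in track.effectChain.effects.
type EffectLike = { id: string; expanded?: boolean };

// Return a new array with only the matching effect's `expanded` flag flipped,
// leaving every other element (and the input array itself) untouched.
function toggleExpanded(effects: EffectLike[], effectId: string): EffectLike[] {
  return effects.map((e) =>
    e.id === effectId ? { ...e, expanded: !e.expanded } : e,
  );
}
```

Returning fresh object references for changed items is what lets React's shallow comparison detect the update.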

@@ -0,0 +1,246 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface TrackFaderProps {
value: number;
peakLevel: number;
rmsLevel: number;
onChange: (value: number) => void;
onTouchStart?: () => void;
onTouchEnd?: () => void;
className?: string;
}
export function TrackFader({
value,
peakLevel,
rmsLevel,
onChange,
onTouchStart,
onTouchEnd,
className,
}: TrackFaderProps) {
const [isDragging, setIsDragging] = React.useState(false);
const containerRef = React.useRef<HTMLDivElement>(null);
// Convert linear 0-1 to dB scale for display
const linearToDb = (linear: number): number => {
if (linear === 0) return -60;
const db = 20 * Math.log10(linear);
return Math.max(-60, Math.min(0, db));
};
const valueDb = linearToDb(value);
const peakDb = linearToDb(peakLevel);
const rmsDb = linearToDb(rmsLevel);
// Calculate bar widths (0-100%)
const peakWidth = ((peakDb + 60) / 60) * 100;
const rmsWidth = ((rmsDb + 60) / 60) * 100;
const handleMouseDown = (e: React.MouseEvent) => {
e.preventDefault();
setIsDragging(true);
onTouchStart?.();
updateValue(e.clientY);
};
const handleMouseMove = React.useCallback(
(e: MouseEvent) => {
if (!isDragging) return;
updateValue(e.clientY);
},
[isDragging]
);
const handleMouseUp = React.useCallback(() => {
setIsDragging(false);
onTouchEnd?.();
}, [onTouchEnd]);
const handleTouchStart = (e: React.TouchEvent) => {
e.preventDefault();
const touch = e.touches[0];
setIsDragging(true);
onTouchStart?.();
updateValue(touch.clientY);
};
const handleTouchMove = React.useCallback(
(e: TouchEvent) => {
if (!isDragging || e.touches.length === 0) return;
const touch = e.touches[0];
updateValue(touch.clientY);
},
[isDragging]
);
const handleTouchEnd = React.useCallback(() => {
setIsDragging(false);
onTouchEnd?.();
}, [onTouchEnd]);
const updateValue = (clientY: number) => {
if (!containerRef.current) return;
const rect = containerRef.current.getBoundingClientRect();
const y = clientY - rect.top;
// Track has 32px (2rem) padding on top and bottom (top-8 bottom-8)
const trackPadding = 32;
const trackHeight = rect.height - (trackPadding * 2);
// Clamp y to track bounds
const clampedY = Math.max(trackPadding, Math.min(rect.height - trackPadding, y));
// Inverted: top = max (1), bottom = min (0)
// Map clampedY from [trackPadding, height-trackPadding] to [1, 0]
const percentage = 1 - ((clampedY - trackPadding) / trackHeight);
onChange(Math.max(0, Math.min(1, percentage)));
};
React.useEffect(() => {
if (isDragging) {
window.addEventListener('mousemove', handleMouseMove);
window.addEventListener('mouseup', handleMouseUp);
window.addEventListener('touchmove', handleTouchMove);
window.addEventListener('touchend', handleTouchEnd);
return () => {
window.removeEventListener('mousemove', handleMouseMove);
window.removeEventListener('mouseup', handleMouseUp);
window.removeEventListener('touchmove', handleTouchMove);
window.removeEventListener('touchend', handleTouchEnd);
};
}
}, [isDragging, handleMouseMove, handleMouseUp, handleTouchMove, handleTouchEnd]);
return (
<div className={cn('flex gap-3 ml-4', className)}>
{/* dB Labels (Left) */}
<div className="flex flex-col justify-between text-[10px] font-mono text-muted-foreground py-1">
<span>0</span>
<span>-12</span>
<span>-24</span>
<span>-60</span>
</div>
{/* Fader Container */}
<div
ref={containerRef}
className="relative w-12 h-40 bg-background/50 rounded-md border border-border/50 cursor-pointer"
onMouseDown={handleMouseDown}
onTouchStart={handleTouchStart}
>
{/* Peak Meter (Horizontal Bar - Top) */}
<div className="absolute inset-x-2 top-2 h-3 bg-background/80 rounded-sm overflow-hidden border border-border/30">
<div
className="absolute left-0 top-0 bottom-0 transition-all duration-75 ease-out"
style={{ width: `${Math.max(0, Math.min(100, peakWidth))}%` }}
>
<div className={cn(
'w-full h-full',
peakDb > -3 ? 'bg-red-500' :
peakDb > -6 ? 'bg-yellow-500' :
'bg-green-500'
)} />
</div>
</div>
{/* RMS Meter (Horizontal Bar - Bottom) */}
<div className="absolute inset-x-2 bottom-2 h-3 bg-background/80 rounded-sm overflow-hidden border border-border/30">
<div
className="absolute left-0 top-0 bottom-0 transition-all duration-150 ease-out"
style={{ width: `${Math.max(0, Math.min(100, rmsWidth))}%` }}
>
<div className={cn(
'w-full h-full',
rmsDb > -3 ? 'bg-red-500' :
rmsDb > -6 ? 'bg-yellow-500' :
'bg-green-500'
)} />
</div>
</div>
{/* Fader Track */}
<div className="absolute top-8 bottom-8 left-1/2 -translate-x-1/2 w-1.5 bg-muted/50 rounded-full" />
{/* Fader Handle */}
<div
className="absolute left-1/2 -translate-x-1/2 w-10 h-4 bg-primary/80 border-2 border-primary rounded-md shadow-lg cursor-grab active:cursor-grabbing pointer-events-none transition-all"
style={{
// Inverted: value 1 = top of track (20%), value 0 = bottom of track (80%)
// Track has top-8 bottom-8 padding (20% and 80% of h-40 container)
// Handle moves within 60% range (from 20% to 80%)
top: `calc(${20 + (1 - value) * 60}% - 0.5rem)`,
}}
>
{/* Handle grip lines */}
<div className="absolute inset-0 flex items-center justify-center gap-0.5">
<div className="h-2 w-px bg-primary-foreground/30" />
<div className="h-2 w-px bg-primary-foreground/30" />
<div className="h-2 w-px bg-primary-foreground/30" />
</div>
</div>
{/* dB Scale Markers */}
<div className="absolute inset-0 px-2 py-8 pointer-events-none">
<div className="relative h-full">
{/* -12 dB */}
<div className="absolute left-0 right-0 h-px bg-border/20" style={{ top: '50%' }} />
{/* -6 dB */}
<div className="absolute left-0 right-0 h-px bg-yellow-500/20" style={{ top: '20%' }} />
{/* -3 dB */}
<div className="absolute left-0 right-0 h-px bg-red-500/30" style={{ top: '10%' }} />
</div>
</div>
</div>
{/* Value and Level Display (Right) */}
<div className="flex flex-col justify-between items-start text-[9px] font-mono py-1 w-[36px]">
{/* Current dB Value */}
<div className={cn(
'font-bold text-[11px]',
valueDb > -3 ? 'text-red-500' :
valueDb > -6 ? 'text-yellow-500' :
'text-green-500'
)}>
{valueDb > -60 ? `${valueDb.toFixed(1)}` : '-∞'}
</div>
{/* Spacer */}
<div className="flex-1" />
{/* Peak Level */}
<div className="flex flex-col items-start">
<span className="text-muted-foreground/60">PK</span>
<span className={cn(
'font-mono text-[10px]',
peakDb > -3 ? 'text-red-500' :
peakDb > -6 ? 'text-yellow-500' :
'text-green-500'
)}>
{peakDb > -60 ? `${peakDb.toFixed(1)}` : '-∞'}
</span>
</div>
{/* RMS Level */}
<div className="flex flex-col items-start">
<span className="text-muted-foreground/60">RM</span>
<span className={cn(
'font-mono text-[10px]',
rmsDb > -3 ? 'text-red-500' :
rmsDb > -6 ? 'text-yellow-500' :
'text-green-500'
)}>
{rmsDb > -60 ? `${rmsDb.toFixed(1)}` : '-∞'}
</span>
</div>
{/* dB Label */}
<span className="text-muted-foreground/60 text-[8px]">dB</span>
</div>
</div>
);
}
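The handle placement in the fader above follows the mapping described in its style comment: the track occupies the middle 60% of the container, so a normalized value maps to a CSS `top` of 20 + (1 − value) · 60 percent. A minimal sketch of that mapping (the helper name is illustrative, not part of the codebase):

```typescript
// Illustrative sketch: maps a normalized fader value (0..1) to the CSS `top`
// percentage used above, where value 1 sits at 20% (track top) and value 0
// sits at 80% (track bottom).
function handleTopPercent(value: number): number {
  return 20 + (1 - value) * 60;
}
```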


@@ -0,0 +1,180 @@
'use client';
import * as React from 'react';
import { Volume2, VolumeX, Headphones, Mic, X, ChevronDown, ChevronRight } from 'lucide-react';
import { Button } from '@/components/ui/Button';
import { Slider } from '@/components/ui/Slider';
import type { Track } from '@/types/track';
import { cn } from '@/lib/utils/cn';
export interface TrackHeaderProps {
track: Track;
onToggleMute: () => void;
onToggleSolo: () => void;
onToggleCollapse: () => void;
onVolumeChange: (volume: number) => void;
onPanChange: (pan: number) => void;
onRemove: () => void;
onNameChange: (name: string) => void;
}
export function TrackHeader({
track,
onToggleMute,
onToggleSolo,
onToggleCollapse,
onVolumeChange,
onPanChange,
onRemove,
onNameChange,
}: TrackHeaderProps) {
const [isEditingName, setIsEditingName] = React.useState(false);
const [nameInput, setNameInput] = React.useState(String(track.name || 'Untitled Track'));
const inputRef = React.useRef<HTMLInputElement>(null);
const handleNameClick = () => {
setIsEditingName(true);
setNameInput(String(track.name || 'Untitled Track'));
};
const handleNameBlur = () => {
setIsEditingName(false);
if (nameInput.trim()) {
onNameChange(nameInput.trim());
} else {
setNameInput(String(track.name || 'Untitled Track'));
}
};
const handleNameKeyDown = (e: React.KeyboardEvent) => {
if (e.key === 'Enter') {
inputRef.current?.blur();
} else if (e.key === 'Escape') {
setNameInput(String(track.name || 'Untitled Track'));
setIsEditingName(false);
}
};
React.useEffect(() => {
if (isEditingName && inputRef.current) {
inputRef.current.focus();
inputRef.current.select();
}
}, [isEditingName]);
return (
<div className="flex items-center gap-2 px-3 py-2 border-b border-border bg-card">
{/* Collapse Toggle */}
<Button
variant="ghost"
size="icon-sm"
onClick={onToggleCollapse}
title={track.collapsed ? 'Expand track' : 'Collapse track'}
>
{track.collapsed ? (
<ChevronRight className="h-4 w-4" />
) : (
<ChevronDown className="h-4 w-4" />
)}
</Button>
{/* Track Color Indicator */}
<div
className="w-1 h-8 rounded-full"
style={{ backgroundColor: track.color }}
/>
{/* Track Name */}
<div className="flex-1 min-w-0">
{isEditingName ? (
<input
ref={inputRef}
type="text"
value={nameInput}
onChange={(e) => setNameInput(e.target.value)}
onBlur={handleNameBlur}
onKeyDown={handleNameKeyDown}
className="w-full px-2 py-1 text-sm font-medium bg-background border border-border rounded"
/>
) : (
<div
onClick={handleNameClick}
className="px-2 py-1 text-sm font-medium text-foreground truncate cursor-pointer hover:bg-accent rounded"
title={String(track.name || 'Untitled Track')}
>
{String(track.name || 'Untitled Track')}
</div>
)}
</div>
{/* Solo Button */}
<Button
variant={track.solo ? 'secondary' : 'ghost'}
size="icon-sm"
onClick={onToggleSolo}
title="Solo track"
className={cn(track.solo && 'bg-yellow-500/20 hover:bg-yellow-500/30')}
>
<Headphones className={cn('h-4 w-4', track.solo && 'text-yellow-500')} />
</Button>
{/* Mute Button */}
<Button
variant={track.mute ? 'secondary' : 'ghost'}
size="icon-sm"
onClick={onToggleMute}
title="Mute track"
className={cn(track.mute && 'bg-red-500/20 hover:bg-red-500/30')}
>
{track.mute ? (
<VolumeX className="h-4 w-4 text-red-500" />
) : (
<Volume2 className="h-4 w-4" />
)}
</Button>
{/* Volume Slider */}
<div className="flex items-center gap-2 w-24">
<Slider
value={[track.volume]}
onValueChange={([value]) => onVolumeChange(value)}
min={0}
max={1}
step={0.01}
className="flex-1"
title={`Volume: ${Math.round(track.volume * 100)}%`}
/>
<span className="text-xs text-muted-foreground w-8 text-right">
{Math.round(track.volume * 100)}
</span>
</div>
{/* Pan Knob (simplified as slider) */}
<div className="flex items-center gap-2 w-20">
<Slider
value={[track.pan]}
onValueChange={([value]) => onPanChange(value)}
min={-1}
max={1}
step={0.01}
className="flex-1"
title={`Pan: ${track.pan === 0 ? 'C' : track.pan < 0 ? `L${Math.abs(Math.round(track.pan * 100))}` : `R${Math.round(track.pan * 100)}`}`}
/>
<span className="text-xs text-muted-foreground w-6 text-right">
{track.pan === 0 ? 'C' : track.pan < 0 ? `L` : 'R'}
</span>
</div>
{/* Remove Button */}
<Button
variant="ghost"
size="icon-sm"
onClick={onRemove}
title="Remove track"
className="text-destructive hover:text-destructive hover:bg-destructive/10"
>
<X className="h-4 w-4" />
</Button>
</div>
);
}

File diff suppressed because it is too large.


@@ -4,7 +4,7 @@ import { cn } from '@/lib/utils/cn';
 export interface ButtonProps
 extends React.ButtonHTMLAttributes<HTMLButtonElement> {
 variant?: 'default' | 'destructive' | 'outline' | 'secondary' | 'ghost' | 'link';
-size?: 'default' | 'sm' | 'lg' | 'icon';
+size?: 'default' | 'sm' | 'lg' | 'icon' | 'icon-sm';
 }
 const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
@@ -32,6 +32,7 @@ const Button = React.forwardRef<HTMLButtonElement, ButtonProps>(
 'h-9 rounded-md px-3': size === 'sm',
 'h-11 rounded-md px-8': size === 'lg',
 'h-10 w-10': size === 'icon',
+'h-8 w-8': size === 'icon-sm',
 },
 className
 )}


@@ -0,0 +1,243 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface CircularKnobProps {
value: number; // -1.0 to 1.0 for pan
onChange: (value: number) => void;
min?: number;
max?: number;
step?: number;
size?: number;
className?: string;
label?: string;
formatValue?: (value: number) => string;
onTouchStart?: () => void;
onTouchEnd?: () => void;
}
export function CircularKnob({
value,
onChange,
min = -1,
max = 1,
step = 0.01,
size = 48,
className,
label,
formatValue,
onTouchStart,
onTouchEnd,
}: CircularKnobProps) {
const knobRef = React.useRef<HTMLDivElement>(null);
const [isDragging, setIsDragging] = React.useState(false);
const dragStartRef = React.useRef({ x: 0, y: 0, value: 0 });
const updateValue = React.useCallback(
(clientX: number, clientY: number) => {
if (!knobRef.current) return;
// Calculate vertical drag distance from start
const deltaY = dragStartRef.current.y - clientY;
const sensitivity = 200; // pixels for full range
const range = max - min;
const delta = (deltaY / sensitivity) * range;
let newValue = dragStartRef.current.value + delta;
// Snap to step
if (step) {
newValue = Math.round(newValue / step) * step;
}
// Clamp to range
newValue = Math.max(min, Math.min(max, newValue));
onChange(newValue);
},
[min, max, step, onChange]
);
const handleMouseDown = React.useCallback(
(e: React.MouseEvent) => {
e.preventDefault();
setIsDragging(true);
dragStartRef.current = {
x: e.clientX,
y: e.clientY,
value,
};
onTouchStart?.();
},
[value, onTouchStart]
);
const handleMouseMove = React.useCallback(
(e: MouseEvent) => {
if (isDragging) {
updateValue(e.clientX, e.clientY);
}
},
[isDragging, updateValue]
);
const handleMouseUp = React.useCallback(() => {
setIsDragging(false);
onTouchEnd?.();
}, [onTouchEnd]);
const handleTouchStart = React.useCallback(
(e: React.TouchEvent) => {
e.preventDefault();
const touch = e.touches[0];
setIsDragging(true);
dragStartRef.current = {
x: touch.clientX,
y: touch.clientY,
value,
};
onTouchStart?.();
},
[value, onTouchStart]
);
const handleTouchMove = React.useCallback(
(e: TouchEvent) => {
if (isDragging && e.touches.length > 0) {
const touch = e.touches[0];
updateValue(touch.clientX, touch.clientY);
}
},
[isDragging, updateValue]
);
const handleTouchEnd = React.useCallback(() => {
setIsDragging(false);
onTouchEnd?.();
}, [onTouchEnd]);
React.useEffect(() => {
if (isDragging) {
window.addEventListener('mousemove', handleMouseMove);
window.addEventListener('mouseup', handleMouseUp);
window.addEventListener('touchmove', handleTouchMove);
window.addEventListener('touchend', handleTouchEnd);
return () => {
window.removeEventListener('mousemove', handleMouseMove);
window.removeEventListener('mouseup', handleMouseUp);
window.removeEventListener('touchmove', handleTouchMove);
window.removeEventListener('touchend', handleTouchEnd);
};
}
}, [isDragging, handleMouseMove, handleMouseUp, handleTouchMove, handleTouchEnd]);
// Calculate rotation angle (-135deg to 135deg, 270deg range)
const percentage = (value - min) / (max - min);
const angle = -135 + percentage * 270;
const displayValue = formatValue
? formatValue(value)
: value === 0
? 'C'
: value < 0
? `L${Math.abs(Math.round(value * 100))}`
: `R${Math.round(value * 100)}`;
// Calculate arc parameters for center-based rendering
const isNearCenter = Math.abs(value) < 0.01;
const centerPercentage = 0.5; // Center position (50%)
// Arc goes from center to current value
let arcStartPercentage: number;
let arcLength: number;
if (value < -0.01) {
// Left side: arc from value to center
arcStartPercentage = percentage;
arcLength = centerPercentage - percentage;
} else if (value > 0.01) {
// Right side: arc from center to value
arcStartPercentage = centerPercentage;
arcLength = percentage - centerPercentage;
} else {
// Center: no arc
arcStartPercentage = centerPercentage;
arcLength = 0;
}
return (
<div className={cn('flex flex-col items-center gap-1', className)}>
{label && (
<div className="text-[10px] text-muted-foreground uppercase tracking-wide">
{label}
</div>
)}
<div
ref={knobRef}
onMouseDown={handleMouseDown}
onTouchStart={handleTouchStart}
className="relative cursor-pointer select-none"
style={{ width: size, height: size }}
>
{/* Outer ring */}
<svg
width={size}
height={size}
viewBox={`0 0 ${size} ${size}`}
className="absolute inset-0"
>
{/* Background arc */}
<circle
cx={size / 2}
cy={size / 2}
r={size / 2 - 4}
fill="none"
stroke="currentColor"
strokeWidth="2"
className="text-muted/30"
/>
{/* Value arc - only show when not centered */}
{!isNearCenter && (
<circle
cx={size / 2}
cy={size / 2}
r={size / 2 - 4}
fill="none"
stroke="currentColor"
strokeWidth="3"
strokeLinecap="round"
className="text-primary"
strokeDasharray={`${(arcLength * 270 * Math.PI * (size / 2 - 4)) / 180} ${(Math.PI * 2 * (size / 2 - 4))}`}
transform={`rotate(${-225 + arcStartPercentage * 270} ${size / 2} ${size / 2})`}
/>
)}
</svg>
{/* Knob body */}
<div
className="absolute inset-0 rounded-full bg-card border-2 border-border shadow-sm flex items-center justify-center transition-transform hover:scale-105 active:scale-95"
style={{
transform: `rotate(${angle}deg)`,
margin: '4px',
}}
>
{/* Indicator line */}
<div className="absolute top-1 left-1/2 w-0.5 h-2 bg-primary rounded-full -translate-x-1/2" />
</div>
</div>
{/* Value Display */}
<div className="text-[10px] font-medium text-foreground min-w-[32px] text-center">
{displayValue}
</div>
</div>
);
}
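The rotation math in CircularKnob above can be sketched on its own: a value in [min, max] maps linearly onto a 270° sweep from −135° to +135°, with the range midpoint at 0° (the helper name is illustrative, not part of the codebase):

```typescript
// Illustrative sketch of the knob rotation above: normalize the value into
// [0, 1], then map it onto the 270-degree sweep starting at -135 degrees.
function knobAngle(value: number, min = -1, max = 1): number {
  const percentage = (value - min) / (max - min);
  return -135 + percentage * 270;
}
```

For the default pan range, hard left (−1) points at −135°, center (0) points straight up, and hard right (+1) points at +135°.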


@@ -0,0 +1,195 @@
'use client';
import * as React from 'react';
import { Command } from 'lucide-react';
import { cn } from '@/lib/utils/cn';
export interface CommandAction {
id: string;
label: string;
description?: string;
shortcut?: string;
category: 'edit' | 'playback' | 'file' | 'view' | 'effects' | 'tracks';
action: () => void;
}
export interface CommandPaletteProps {
actions: CommandAction[];
className?: string;
}
export function CommandPalette({ actions, className }: CommandPaletteProps) {
const [isOpen, setIsOpen] = React.useState(false);
const [search, setSearch] = React.useState('');
const [selectedIndex, setSelectedIndex] = React.useState(0);
const inputRef = React.useRef<HTMLInputElement>(null);
const filteredActions = React.useMemo(() => {
if (!search) return actions;
const query = search.toLowerCase();
return actions.filter(
(action) =>
action.label.toLowerCase().includes(query) ||
action.description?.toLowerCase().includes(query) ||
action.category.toLowerCase().includes(query)
);
}, [actions, search]);
const groupedActions = React.useMemo(() => {
const groups: Record<string, CommandAction[]> = {};
filteredActions.forEach((action) => {
if (!groups[action.category]) {
groups[action.category] = [];
}
groups[action.category].push(action);
});
return groups;
}, [filteredActions]);
React.useEffect(() => {
const handleKeyDown = (e: KeyboardEvent) => {
// Ctrl+K or Cmd+K to open
if ((e.ctrlKey || e.metaKey) && e.key === 'k') {
e.preventDefault();
setIsOpen(true);
}
// Escape to close
if (e.key === 'Escape') {
setIsOpen(false);
setSearch('');
setSelectedIndex(0);
}
};
window.addEventListener('keydown', handleKeyDown);
return () => window.removeEventListener('keydown', handleKeyDown);
}, []);
React.useEffect(() => {
if (isOpen && inputRef.current) {
inputRef.current.focus();
}
}, [isOpen]);
const handleKeyDown = (e: React.KeyboardEvent) => {
if (e.key === 'ArrowDown') {
e.preventDefault();
setSelectedIndex((prev) => Math.min(prev + 1, filteredActions.length - 1));
} else if (e.key === 'ArrowUp') {
e.preventDefault();
setSelectedIndex((prev) => Math.max(prev - 1, 0));
} else if (e.key === 'Enter') {
e.preventDefault();
if (filteredActions[selectedIndex]) {
filteredActions[selectedIndex].action();
setIsOpen(false);
setSearch('');
setSelectedIndex(0);
}
}
};
const executeAction = (action: CommandAction) => {
action.action();
setIsOpen(false);
setSearch('');
setSelectedIndex(0);
};
if (!isOpen) {
return (
<button
onClick={() => setIsOpen(true)}
className={cn(
'h-9 w-9 rounded-md',
'inline-flex items-center justify-center',
'hover:bg-accent hover:text-accent-foreground',
'transition-colors',
className
)}
title="Command Palette (Ctrl+K)"
>
<Command className="h-5 w-5" />
</button>
);
}
return (
<div className="fixed inset-0 z-50 flex items-start justify-center p-4 sm:p-8 bg-black/50 backdrop-blur-sm">
<div
className={cn(
'w-full max-w-2xl mt-20 bg-card rounded-lg border-2 border-border shadow-2xl',
'animate-slideInFromTop',
className
)}
>
{/* Search Input */}
<div className="flex items-center gap-3 p-4 border-b border-border">
<Command className="h-5 w-5 text-muted-foreground" />
<input
ref={inputRef}
type="text"
value={search}
onChange={(e) => {
setSearch(e.target.value);
setSelectedIndex(0);
}}
onKeyDown={handleKeyDown}
placeholder="Type a command or search..."
className="flex-1 bg-transparent border-none outline-none text-foreground placeholder:text-muted-foreground"
/>
<kbd className="px-2 py-1 text-xs bg-muted rounded border border-border">
ESC
</kbd>
</div>
{/* Results */}
<div className="max-h-96 overflow-y-auto custom-scrollbar p-2">
{Object.keys(groupedActions).length === 0 ? (
<div className="p-8 text-center text-muted-foreground text-sm">
No commands found
</div>
) : (
Object.entries(groupedActions).map(([category, categoryActions]) => (
<div key={category} className="mb-4 last:mb-0">
<div className="px-2 py-1 text-xs font-semibold text-muted-foreground uppercase">
{category}
</div>
{categoryActions.map((action, index) => {
const globalIndex = filteredActions.indexOf(action);
return (
<button
key={action.id}
onClick={() => executeAction(action)}
className={cn(
'w-full flex items-center justify-between gap-4 px-3 py-2.5 rounded-md',
'hover:bg-secondary/50 transition-colors text-left',
globalIndex === selectedIndex && 'bg-secondary'
)}
>
<div className="flex-1 min-w-0">
<div className="text-sm font-medium text-foreground">
{action.label}
</div>
{action.description && (
<div className="text-xs text-muted-foreground truncate">
{action.description}
</div>
)}
</div>
{action.shortcut && (
<kbd className="px-2 py-1 text-xs bg-muted rounded border border-border whitespace-nowrap">
{action.shortcut}
</kbd>
)}
</button>
);
})}
</div>
))
)}
</div>
</div>
</div>
);
}
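The grouping step in CommandPalette above can be sketched in isolation: filtered actions are bucketed by their `category` field, preserving their filtered order within each bucket (the type and helper name are inlined here for illustration; the real component uses `CommandAction` and `React.useMemo`):

```typescript
// Illustrative sketch of the category grouping above.
type Action = { id: string; label: string; category: string };

function groupByCategory(actions: Action[]): Record<string, Action[]> {
  const groups: Record<string, Action[]> = {};
  for (const action of actions) {
    // Create the bucket on first use, then append in filtered order.
    (groups[action.category] ??= []).push(action);
  }
  return groups;
}
```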

components/ui/Modal.tsx — new file, 118 lines

@@ -0,0 +1,118 @@
'use client';
import * as React from 'react';
import { X } from 'lucide-react';
import { Button } from './Button';
import { cn } from '@/lib/utils/cn';
export interface ModalProps {
open: boolean;
onClose: () => void;
title: string;
description?: string;
children: React.ReactNode;
footer?: React.ReactNode;
size?: 'sm' | 'md' | 'lg' | 'xl';
className?: string;
}
export function Modal({
open,
onClose,
title,
description,
children,
footer,
size = 'md',
className,
}: ModalProps) {
// Close on Escape key
React.useEffect(() => {
const handleEscape = (e: KeyboardEvent) => {
if (e.key === 'Escape' && open) {
onClose();
}
};
if (open) {
document.addEventListener('keydown', handleEscape);
// Prevent body scroll when modal is open
document.body.style.overflow = 'hidden';
}
return () => {
document.removeEventListener('keydown', handleEscape);
document.body.style.overflow = 'unset';
};
}, [open, onClose]);
if (!open) return null;
const sizeClasses = {
sm: 'max-w-sm',
md: 'max-w-md',
lg: 'max-w-lg',
xl: 'max-w-xl',
};
return (
<div className="fixed inset-0 z-50 flex items-center justify-center">
{/* Backdrop */}
<div
className="fixed inset-0 bg-black/50 backdrop-blur-sm"
onClick={onClose}
aria-hidden="true"
/>
{/* Modal */}
<div
className={cn(
'relative w-full bg-card border border-border rounded-lg shadow-lg',
'flex flex-col max-h-[90vh]',
sizeClasses[size],
className
)}
role="dialog"
aria-modal="true"
aria-labelledby="modal-title"
>
{/* Header */}
<div className="flex items-start justify-between p-4 border-b border-border">
<div className="flex-1">
<h2
id="modal-title"
className="text-lg font-semibold text-foreground"
>
{title}
</h2>
{description && (
<p className="mt-1 text-sm text-muted-foreground">
{description}
</p>
)}
</div>
<Button
variant="ghost"
size="icon-sm"
onClick={onClose}
className="ml-2"
>
<X className="h-4 w-4" />
</Button>
</div>
{/* Content */}
<div className="flex-1 overflow-y-auto custom-scrollbar p-4">
{children}
</div>
{/* Footer */}
{footer && (
<div className="flex items-center justify-end gap-2 p-4 border-t border-border">
{footer}
</div>
)}
</div>
</div>
);
}


@@ -4,14 +4,17 @@ import * as React from 'react';
 import { cn } from '@/lib/utils/cn';
 export interface SliderProps
-extends Omit<React.InputHTMLAttributes<HTMLInputElement>, 'onChange' | 'value'> {
-value?: number;
+extends Omit<React.InputHTMLAttributes<HTMLInputElement>, 'onChange' | 'value' | 'onValueChange'> {
+value?: number | number[];
 onChange?: (value: number) => void;
+onValueChange?: (value: number[]) => void;
 min?: number;
 max?: number;
 step?: number;
 label?: string;
 showValue?: boolean;
+onTouchStart?: () => void;
+onTouchEnd?: () => void;
 }
 const Slider = React.forwardRef<HTMLInputElement, SliderProps>(
@@ -20,20 +23,43 @@ const Slider = React.forwardRef<HTMLInputElement, SliderProps>(
 className,
 value = 0,
 onChange,
+onValueChange,
 min = 0,
 max = 100,
 step = 1,
 label,
 showValue = false,
 disabled,
+onTouchStart,
+onTouchEnd,
 ...props
 },
 ref
 ) => {
+// Support both value formats (number or number[])
+const currentValue = Array.isArray(value) ? value[0] : value;
 const handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {
-onChange?.(parseFloat(e.target.value));
+const numValue = parseFloat(e.target.value);
+onChange?.(numValue);
+onValueChange?.([numValue]);
 };
+const handleMouseDown = () => {
+onTouchStart?.();
+};
+const handleMouseUp = () => {
+onTouchEnd?.();
+};
+React.useEffect(() => {
+if (onTouchEnd) {
+window.addEventListener('mouseup', handleMouseUp);
+return () => window.removeEventListener('mouseup', handleMouseUp);
+}
+}, [onTouchEnd]);
 return (
 <div className={cn('w-full', className)}>
 {(label || showValue) && (
@@ -44,7 +70,7 @@ const Slider = React.forwardRef<HTMLInputElement, SliderProps>(
 </label>
 )}
 {showValue && (
-<span className="text-sm text-muted-foreground">{value}</span>
+<span className="text-sm text-muted-foreground">{currentValue}</span>
 )}
 </div>
 )}
@@ -54,8 +80,9 @@ const Slider = React.forwardRef<HTMLInputElement, SliderProps>(
 min={min}
 max={max}
 step={step}
-value={value}
+value={currentValue}
 onChange={handleChange}
+onMouseDown={handleMouseDown}
 disabled={disabled}
 className={cn(
 'w-full h-2 bg-secondary rounded-lg appearance-none cursor-pointer',


@@ -100,17 +100,17 @@ function ToastItem({
 return (
 <div
 className={cn(
-'flex items-start gap-3 rounded-lg border p-4 shadow-lg pointer-events-auto',
-'animate-slideInFromRight',
+'flex items-start gap-3 rounded-lg border-2 p-4 shadow-2xl pointer-events-auto',
+'animate-slideInFromRight backdrop-blur-sm',
 {
-'bg-card border-border': variant === 'default',
-'bg-success/10 border-success text-success-foreground':
+'bg-card/95 border-border text-foreground': variant === 'default',
+'bg-success/90 border-success text-white dark:text-white':
 variant === 'success',
-'bg-destructive/10 border-destructive text-destructive-foreground':
+'bg-destructive/90 border-destructive text-white dark:text-white':
 variant === 'error',
-'bg-warning/10 border-warning text-warning-foreground':
+'bg-warning/90 border-warning text-black dark:text-black':
 variant === 'warning',
-'bg-info/10 border-info text-info-foreground': variant === 'info',
+'bg-info/90 border-info text-white dark:text-white': variant === 'info',
 }
 )}
 >
@@ -125,7 +125,8 @@ function ToastItem({
 </div>
 <button
 onClick={() => onRemove(toast.id)}
-className="flex-shrink-0 rounded-md p-1 hover:bg-black/10 dark:hover:bg-white/10 transition-colors"
+className="flex-shrink-0 rounded-md p-1 hover:bg-black/20 dark:hover:bg-white/20 transition-colors"
+aria-label="Close"
 >
 <X className="h-4 w-4" />
 </button>


@@ -0,0 +1,165 @@
'use client';
import * as React from 'react';
import { cn } from '@/lib/utils/cn';
export interface VerticalFaderProps {
value: number; // 0.0 to 1.0
level?: number; // 0.0 to 1.0 (for level meter display)
onChange: (value: number) => void;
min?: number;
max?: number;
step?: number;
className?: string;
showDb?: boolean;
onTouchStart?: () => void;
onTouchEnd?: () => void;
}
export function VerticalFader({
value,
level = 0,
onChange,
min = 0,
max = 1,
step = 0.01,
className,
showDb = true,
onTouchStart,
onTouchEnd,
}: VerticalFaderProps) {
const trackRef = React.useRef<HTMLDivElement>(null);
const [isDragging, setIsDragging] = React.useState(false);
const updateValue = React.useCallback(
(clientY: number) => {
if (!trackRef.current) return;
const rect = trackRef.current.getBoundingClientRect();
const height = rect.height;
const y = Math.max(0, Math.min(height, clientY - rect.top));
// Invert Y (top = max, bottom = min)
const percentage = 1 - y / height;
const range = max - min;
let newValue = min + percentage * range;
// Snap to step
if (step) {
newValue = Math.round(newValue / step) * step;
}
// Clamp to range
newValue = Math.max(min, Math.min(max, newValue));
onChange(newValue);
},
[min, max, step, onChange]
);
const handleMouseDown = React.useCallback(
(e: React.MouseEvent) => {
e.preventDefault();
setIsDragging(true);
updateValue(e.clientY);
onTouchStart?.();
},
[updateValue, onTouchStart]
);
const handleMouseMove = React.useCallback(
(e: MouseEvent) => {
if (isDragging) {
updateValue(e.clientY);
}
},
[isDragging, updateValue]
);
const handleMouseUp = React.useCallback(() => {
setIsDragging(false);
onTouchEnd?.();
}, [onTouchEnd]);
React.useEffect(() => {
if (isDragging) {
window.addEventListener('mousemove', handleMouseMove);
window.addEventListener('mouseup', handleMouseUp);
return () => {
window.removeEventListener('mousemove', handleMouseMove);
window.removeEventListener('mouseup', handleMouseUp);
};
}
}, [isDragging, handleMouseMove, handleMouseUp]);
// Convert value to percentage (0-100)
const valuePercentage = ((value - min) / (max - min)) * 100;
// Convert level to dB for display
const db = value === 0 ? -Infinity : 20 * Math.log10(value);
const levelDb = level === 0 ? -Infinity : (level * 60) - 60;
return (
<div className={cn('flex flex-col items-center gap-1', className)}>
{/* dB Display */}
{showDb && (
<div className="text-[10px] font-mono text-muted-foreground min-w-[32px] text-center">
{db === -Infinity ? '-∞' : `${db.toFixed(1)}`}
</div>
)}
{/* Fader Track */}
<div
ref={trackRef}
onMouseDown={handleMouseDown}
className="relative w-8 flex-1 min-h-[80px] max-h-[140px] bg-background/50 border border-border rounded cursor-pointer select-none overflow-hidden"
>
{/* Volume Level Overlay - subtle fill up to fader handle */}
<div
className="absolute bottom-0 left-0 right-0 bg-primary/10"
style={{ height: `${valuePercentage}%` }}
/>
{/* Level Meter (actual level) - capped at fader handle position */}
<div
className="absolute bottom-0 left-0 right-0 transition-all duration-75"
style={{
height: `${Math.min(level * 100, valuePercentage)}%`,
background: 'linear-gradient(to top, rgb(34, 197, 94) 0%, rgb(34, 197, 94) 70%, rgb(234, 179, 8) 85%, rgb(239, 68, 68) 100%)',
opacity: 0.6,
}}
/>
{/* Volume Value Fill - Removed to show gradient spectrum */}
{/* Fader Handle */}
<div
className="absolute left-0 right-0 h-3 -ml-1 -mr-1 bg-primary/70 border-2 border-primary rounded-sm shadow-lg cursor-grab active:cursor-grabbing backdrop-blur-sm"
style={{
bottom: `calc(${valuePercentage}% - 6px)`,
width: 'calc(100% + 8px)',
}}
/>
{/* Scale Marks */}
<div className="absolute inset-0 pointer-events-none">
{[0.25, 0.5, 0.75].map((mark) => (
<div
key={mark}
className="absolute left-0 right-0 h-px bg-background/50"
style={{ bottom: `${mark * 100}%` }}
/>
))}
</div>
</div>
{/* Level dB Display */}
{showDb && (
<div className="text-[10px] font-mono text-muted-foreground min-w-[32px] text-center">
{levelDb === -Infinity ? '-∞' : `${levelDb.toFixed(0)}`}
</div>
)}
</div>
);
}
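The dB readout in VerticalFader above uses the standard amplitude-to-decibel conversion: a linear gain g is shown as 20 · log10(g) dB, with zero gain displayed as −∞. A minimal sketch (the helper name is illustrative, not part of the codebase):

```typescript
// Illustrative sketch of the fader's dB display above: convert a linear gain
// value to decibels, treating silence (gain 0) as negative infinity.
function gainToDb(gain: number): number {
  return gain === 0 ? -Infinity : 20 * Math.log10(gain);
}
```

Unity gain (1.0) reads 0 dB, and halving the gain drops the readout by roughly 6 dB.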


@@ -0,0 +1,233 @@
/**
* Automation utility functions for creating and manipulating automation data
*/
import type {
AutomationLane,
AutomationPoint,
AutomationCurveType,
AutomationMode,
CreateAutomationPointInput,
} from '@/types/automation';
/**
* Generate unique automation point ID
*/
export function generateAutomationPointId(): string {
return `autopoint-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
/**
* Generate unique automation lane ID
*/
export function generateAutomationLaneId(): string {
return `autolane-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
/**
* Create a new automation point
*/
export function createAutomationPoint(
input: CreateAutomationPointInput
): AutomationPoint {
return {
id: generateAutomationPointId(),
...input,
};
}
/**
* Create a new automation lane
*/
export function createAutomationLane(
trackId: string,
parameterId: string,
parameterName: string,
valueRange: {
min: number;
max: number;
unit?: string;
formatter?: (value: number) => string;
}
): AutomationLane {
return {
id: generateAutomationLaneId(),
trackId,
parameterId,
parameterName,
visible: true,
height: 80,
points: [],
mode: 'read',
valueRange,
};
}
/**
* Linear interpolation between two values
*/
function lerp(a: number, b: number, t: number): number {
return a + (b - a) * t;
}
/**
* Evaluate automation value at a specific time using linear interpolation
*/
export function evaluateAutomationLinear(
points: AutomationPoint[],
time: number
): number {
if (points.length === 0) return 0.5; // Default middle value
if (points.length === 1) return points[0].value;
// Sort points by time (should already be sorted, but ensure it)
const sortedPoints = [...points].sort((a, b) => a.time - b.time);
// Before first point
if (time <= sortedPoints[0].time) {
return sortedPoints[0].value;
}
// After last point
if (time >= sortedPoints[sortedPoints.length - 1].time) {
return sortedPoints[sortedPoints.length - 1].value;
}
// Find surrounding points
for (let i = 0; i < sortedPoints.length - 1; i++) {
const p1 = sortedPoints[i];
const p2 = sortedPoints[i + 1];
if (time >= p1.time && time <= p2.time) {
// Handle step curve
if (p1.curve === 'step') {
return p1.value;
}
// Linear interpolation
const t = (time - p1.time) / (p2.time - p1.time);
return lerp(p1.value, p2.value, t);
}
}
return sortedPoints[sortedPoints.length - 1].value;
}
/**
* Add an automation point to a lane, maintaining time-sorted order
*/
export function addAutomationPoint(
lane: AutomationLane,
point: CreateAutomationPointInput
): AutomationLane {
const newPoint = createAutomationPoint(point);
const points = [...lane.points, newPoint].sort((a, b) => a.time - b.time);
return {
...lane,
points,
};
}
/**
* Remove an automation point by ID
*/
export function removeAutomationPoint(
lane: AutomationLane,
pointId: string
): AutomationLane {
return {
...lane,
points: lane.points.filter((p) => p.id !== pointId),
};
}
/**
* Update an automation point's time and/or value
*/
export function updateAutomationPoint(
lane: AutomationLane,
pointId: string,
updates: { time?: number; value?: number; curve?: AutomationCurveType }
): AutomationLane {
const points = lane.points.map((p) =>
p.id === pointId ? { ...p, ...updates } : p
);
// Re-sort by time if time was updated
if (updates.time !== undefined) {
points.sort((a, b) => a.time - b.time);
}
return {
...lane,
points,
};
}
/**
* Remove all automation points in a time range
*/
export function clearAutomationRange(
lane: AutomationLane,
startTime: number,
endTime: number
): AutomationLane {
return {
...lane,
points: lane.points.filter((p) => p.time < startTime || p.time > endTime),
};
}
/**
* Format automation value for display based on lane's value range
*/
export function formatAutomationValue(
lane: AutomationLane,
normalizedValue: number
): string {
const { min, max, unit, formatter } = lane.valueRange;
if (formatter) {
const actualValue = lerp(min, max, normalizedValue);
return formatter(actualValue);
}
const actualValue = lerp(min, max, normalizedValue);
// Format based on unit
if (unit === 'dB') {
// Convert to dB scale
const db = normalizedValue === 0 ? -Infinity : 20 * Math.log10(normalizedValue);
return db === -Infinity ? '-∞ dB' : `${db.toFixed(1)} dB`;
}
if (unit === '%') {
return `${(actualValue * 100).toFixed(0)}%`;
}
if (unit === 'ms') {
return `${actualValue.toFixed(1)} ms`;
}
if (unit === 'Hz') {
return `${actualValue.toFixed(0)} Hz`;
}
// Default: 2 decimal places with unit
return unit ? `${actualValue.toFixed(2)} ${unit}` : actualValue.toFixed(2);
}
/**
* Snap value to grid (useful for user input)
*/
export function snapToGrid(value: number, gridSize: number = 0.25): number {
return Math.round(value / gridSize) * gridSize;
}
/**
* Clamp value between 0 and 1
*/
export function clampNormalized(value: number): number {
return Math.max(0, Math.min(1, value));
}
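Taken together, the last two helpers give a safe path from raw pointer input to a stored point value; a typical drag handler would clamp first, then snap. A quick sketch (the two functions are copied from above so it runs standalone):

```typescript
// Copied from the utilities above.
function snapToGrid(value: number, gridSize: number = 0.25): number {
  return Math.round(value / gridSize) * gridSize;
}
function clampNormalized(value: number): number {
  return Math.max(0, Math.min(1, value));
}

// Pointer overshoot above the lane gets clamped, then snapped:
const raw = 1.13;
const stored = snapToGrid(clampNormalized(raw), 0.25);
console.log(stored); // → 1
console.log(snapToGrid(0.61, 0.25)); // → 0.5 (nearest grid step)
```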


@@ -0,0 +1,167 @@
/**
* Automation playback engine
* Applies automation to track parameters in real-time during playback
*/
import type { Track } from '@/types/track';
import type { AutomationLane, AutomationValue } from '@/types/automation';
import { interpolateAutomationValue, applyAutomationToTrack } from './utils';
/**
* Get all automation values at a specific time
*/
export function getAutomationValuesAtTime(
track: Track,
time: number
): AutomationValue[] {
if (!track.automation || track.automation.lanes.length === 0) {
return [];
}
const values: AutomationValue[] = [];
for (const lane of track.automation.lanes) {
// Skip lanes in write mode (don't apply during playback)
if (lane.mode === 'write') continue;
// Skip lanes with no points
if (lane.points.length === 0) continue;
const value = interpolateAutomationValue(lane.points, time);
values.push({
parameterId: lane.parameterId,
value,
time,
});
}
return values;
}
/**
* Apply automation values to a track
* Returns a new track object with automated parameters applied
*/
export function applyAutomationValues(
track: Track,
values: AutomationValue[]
): Track {
let updatedTrack = track;
for (const automation of values) {
updatedTrack = applyAutomationToTrack(
updatedTrack,
automation.parameterId,
automation.value
);
}
return updatedTrack;
}
/**
* Apply automation to all tracks at a specific time
* Returns a new tracks array with automation applied
*/
export function applyAutomationToTracks(
tracks: Track[],
time: number
): Track[] {
return tracks.map((track) => {
const automationValues = getAutomationValuesAtTime(track, time);
if (automationValues.length === 0) {
return track;
}
return applyAutomationValues(track, automationValues);
});
}
/**
* Record automation point during playback
*/
export function recordAutomationPoint(
lane: AutomationLane,
time: number,
value: number
): AutomationLane {
// In write mode, replace all existing points in the recorded region
// For simplicity, just add the point for now
// TODO: Implement proper write mode that clears existing points
const newPoint = {
id: `point-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`,
time,
value,
curve: 'linear' as const,
};
return {
...lane,
points: [...lane.points, newPoint],
};
}
/**
* Automation playback scheduler
* Schedules automation updates at regular intervals during playback
*/
export class AutomationPlaybackScheduler {
private intervalId: number | null = null;
private updateInterval: number = 50; // Update every 50ms (20 Hz)
private onUpdate: ((time: number) => void) | null = null;
/**
* Start the automation scheduler
*/
start(onUpdate: (time: number) => void): void {
if (this.intervalId !== null) {
this.stop();
}
this.onUpdate = onUpdate;
this.intervalId = window.setInterval(() => {
// Get current playback time from your audio engine
// This is a placeholder - you'll need to integrate with your actual playback system
if (this.onUpdate) {
// Call update callback with current time
// The callback should get the time from your actual playback system
this.onUpdate(0); // Placeholder
}
}, this.updateInterval);
}
/**
* Stop the automation scheduler
*/
stop(): void {
if (this.intervalId !== null) {
window.clearInterval(this.intervalId);
this.intervalId = null;
}
this.onUpdate = null;
}
/**
* Set update interval (in milliseconds)
*/
setUpdateInterval(interval: number): void {
this.updateInterval = Math.max(10, Math.min(1000, interval));
// Restart if already running
if (this.intervalId !== null && this.onUpdate) {
const callback = this.onUpdate;
this.stop();
this.start(callback);
}
}
/**
* Check if scheduler is running
*/
isRunning(): boolean {
return this.intervalId !== null;
}
}


@@ -0,0 +1,303 @@
/**
* Automation utility functions
*/
import type {
AutomationLane,
AutomationPoint,
CreateAutomationLaneInput,
CreateAutomationPointInput,
AutomationParameterId,
} from '@/types/automation';
/**
* Generate a unique automation point ID
*/
export function generateAutomationPointId(): string {
return `point-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
/**
* Generate a unique automation lane ID
*/
export function generateAutomationLaneId(): string {
return `lane-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
/**
* Create a new automation point
*/
export function createAutomationPoint(
input: CreateAutomationPointInput
): AutomationPoint {
return {
id: generateAutomationPointId(),
...input,
};
}
/**
* Create a new automation lane
*/
export function createAutomationLane(
trackId: string,
parameterId: AutomationParameterId,
parameterName: string,
input?: Partial<CreateAutomationLaneInput>
): AutomationLane {
return {
id: generateAutomationLaneId(),
trackId,
parameterId,
parameterName,
visible: input?.visible ?? true,
height: input?.height ?? 80,
points: input?.points ?? [],
mode: input?.mode ?? 'read',
color: input?.color,
valueRange: input?.valueRange ?? {
min: 0,
max: 1,
},
};
}
/**
* Create a volume automation lane
*/
export function createVolumeAutomationLane(trackId: string): AutomationLane {
return createAutomationLane(trackId, 'volume', 'Volume', {
valueRange: {
min: 0,
max: 1,
formatter: (value) => `${(value * 100).toFixed(0)}%`,
},
color: 'rgb(34, 197, 94)', // green
});
}
/**
* Create a pan automation lane
*/
export function createPanAutomationLane(trackId: string): AutomationLane {
return createAutomationLane(trackId, 'pan', 'Pan', {
valueRange: {
min: -1,
max: 1,
formatter: (value) => {
// value arrives already mapped into the lane's -1..1 range
if (value === 0) return 'C';
if (value < 0) return `L${Math.abs(Math.round(value * 100))}`;
return `R${Math.round(value * 100)}`;
},
},
color: 'rgb(59, 130, 246)', // blue
});
}
/**
* Interpolate automation value at a specific time
*/
export function interpolateAutomationValue(
points: AutomationPoint[],
time: number
): number {
if (points.length === 0) return 0;
const sortedPoints = [...points].sort((a, b) => a.time - b.time);
// Before first point
if (time <= sortedPoints[0].time) {
return sortedPoints[0].value;
}
// After last point
if (time >= sortedPoints[sortedPoints.length - 1].time) {
return sortedPoints[sortedPoints.length - 1].value;
}
// Find surrounding points
for (let i = 0; i < sortedPoints.length - 1; i++) {
const prevPoint = sortedPoints[i];
const nextPoint = sortedPoints[i + 1];
if (time >= prevPoint.time && time <= nextPoint.time) {
// Handle step curve
if (prevPoint.curve === 'step') {
return prevPoint.value;
}
// Handle bezier curve
if (prevPoint.curve === 'bezier') {
const timeDelta = nextPoint.time - prevPoint.time;
const t = (time - prevPoint.time) / timeDelta;
return interpolateBezier(prevPoint, nextPoint, t);
}
// Linear interpolation (default)
const timeDelta = nextPoint.time - prevPoint.time;
const valueDelta = nextPoint.value - prevPoint.value;
const progress = (time - prevPoint.time) / timeDelta;
return prevPoint.value + valueDelta * progress;
}
}
return sortedPoints[sortedPoints.length - 1].value; // fallback; interior times are covered by the loop above
}
/**
* Interpolate value using cubic Bezier curve
* Uses the control handles from both points to create smooth curves
*/
function interpolateBezier(
p0: AutomationPoint,
p1: AutomationPoint,
t: number
): number {
// Default handle positions if not specified:
// out handle 1/3 of the way toward the next point,
// in handle 1/3 of the way back from the next point
const timeDelta = p1.time - p0.time;
// Control point 1 (out handle from p0)
const c1x = p0.handleOut?.x ?? timeDelta / 3;
const c1y = p0.handleOut?.y ?? 0;
// Control point 2 (in handle from p1)
const c2x = p1.handleIn?.x ?? -timeDelta / 3;
const c2y = p1.handleIn?.y ?? 0;
// Convert handles to absolute positions
const cp1Value = p0.value + c1y;
const cp2Value = p1.value + c2y;
// Cubic Bezier formula: B(t) = (1-t)³P₀ + 3(1-t)²tP₁ + 3(1-t)t²P₂ + t³P₃
const mt = 1 - t;
const mt2 = mt * mt;
const mt3 = mt2 * mt;
const t2 = t * t;
const t3 = t2 * t;
const value =
mt3 * p0.value +
3 * mt2 * t * cp1Value +
3 * mt * t2 * cp2Value +
t3 * p1.value;
return value;
}
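The cubic above can be sanity-checked in isolation: whatever the handles, B(0) must equal the start value and B(1) the end value. A minimal sketch using the same value-offset handle convention (`bezier` here is an illustrative copy, not the module's function):

```typescript
type BPt = {
  time: number;
  value: number;
  handleIn?: { x: number; y: number };
  handleOut?: { x: number; y: number };
};

// Same formula as interpolateBezier above: handle y-offsets are
// relative to their anchor point's value.
function bezier(p0: BPt, p1: BPt, t: number): number {
  const cp1 = p0.value + (p0.handleOut?.y ?? 0);
  const cp2 = p1.value + (p1.handleIn?.y ?? 0);
  const mt = 1 - t;
  return (
    mt * mt * mt * p0.value +
    3 * mt * mt * t * cp1 +
    3 * mt * t * t * cp2 +
    t * t * t * p1.value
  );
}

const a: BPt = { time: 0, value: 0, handleOut: { x: 0.33, y: 0.4 } };
const b: BPt = { time: 1, value: 1, handleIn: { x: -0.33, y: -0.4 } };
console.log(bezier(a, b, 0)); // → 0 (curve starts at p0.value)
console.log(bezier(a, b, 1)); // → 1 (curve ends at p1.value)
// The midpoint is pulled toward the handles rather than the straight line:
console.log(bezier(a, b, 0.5));
```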
/**
* Create smooth bezier handles for a point based on surrounding points
* This creates an "auto-smooth" effect similar to DAWs
*/
export function createSmoothHandles(
prevPoint: AutomationPoint | null,
currentPoint: AutomationPoint,
nextPoint: AutomationPoint | null
): { handleIn: { x: number; y: number }; handleOut: { x: number; y: number } } {
// If no surrounding points, return horizontal handles
if (!prevPoint && !nextPoint) {
return {
handleIn: { x: -0.1, y: 0 },
handleOut: { x: 0.1, y: 0 },
};
}
// Calculate slope from surrounding points
let slope = 0;
if (prevPoint && nextPoint) {
// Use average slope from both neighbors
const timeDelta = nextPoint.time - prevPoint.time;
const valueDelta = nextPoint.value - prevPoint.value;
slope = valueDelta / timeDelta;
} else if (nextPoint) {
// Only have next point
const timeDelta = nextPoint.time - currentPoint.time;
const valueDelta = nextPoint.value - currentPoint.value;
slope = valueDelta / timeDelta;
} else if (prevPoint) {
// Only have previous point
const timeDelta = currentPoint.time - prevPoint.time;
const valueDelta = currentPoint.value - prevPoint.value;
slope = valueDelta / timeDelta;
}
// Angle the handles along the computed slope, at a fixed horizontal distance
const handleDistance = 0.1;
const handleY = slope * handleDistance;
return {
handleIn: { x: -handleDistance, y: -handleY },
handleOut: { x: handleDistance, y: handleY },
};
}
/**
* Generate points along a bezier curve for rendering
* Returns array of {time, value} points
*/
export function generateBezierCurvePoints(
p0: AutomationPoint,
p1: AutomationPoint,
numPoints: number = 50
): Array<{ time: number; value: number }> {
const points: Array<{ time: number; value: number }> = [];
const timeDelta = p1.time - p0.time;
for (let i = 0; i <= numPoints; i++) {
const t = i / numPoints;
const time = p0.time + t * timeDelta;
const value = interpolateBezier(p0, p1, t);
points.push({ time, value });
}
return points;
}
/**
* Apply automation value to track parameter
*/
export function applyAutomationToTrack(
track: any,
parameterId: AutomationParameterId,
value: number
): any {
if (parameterId === 'volume') {
return { ...track, volume: value };
}
if (parameterId === 'pan') {
// Convert 0-1 to -1-1
return { ...track, pan: value * 2 - 1 };
}
// Effect parameters (format: "effect.{effectId}.{paramName}")
if (parameterId.startsWith('effect.')) {
const parts = parameterId.split('.');
if (parts.length === 3) {
const [, effectId, paramName] = parts;
return {
...track,
effectChain: {
...track.effectChain,
effects: track.effectChain.effects.map((effect: any) =>
effect.id === effectId
? {
...effect,
parameters: {
...effect.parameters,
[paramName]: value,
},
}
: effect
),
},
};
}
}
return track;
}

lib/audio/buffer-utils.ts Normal file

@@ -0,0 +1,179 @@
/**
* AudioBuffer manipulation utilities
*/
import { getAudioContext } from './context';
/**
* Extract a portion of an AudioBuffer
*/
export function extractBufferSegment(
buffer: AudioBuffer,
startTime: number,
endTime: number
): AudioBuffer {
const audioContext = getAudioContext();
const startSample = Math.floor(startTime * buffer.sampleRate);
const endSample = Math.floor(endTime * buffer.sampleRate);
const length = endSample - startSample;
const segment = audioContext.createBuffer(
buffer.numberOfChannels,
length,
buffer.sampleRate
);
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const sourceData = buffer.getChannelData(channel);
const targetData = segment.getChannelData(channel);
for (let i = 0; i < length; i++) {
targetData[i] = sourceData[startSample + i];
}
}
return segment;
}
/**
* Delete a portion of an AudioBuffer
*/
export function deleteBufferSegment(
buffer: AudioBuffer,
startTime: number,
endTime: number
): AudioBuffer {
const audioContext = getAudioContext();
const startSample = Math.floor(startTime * buffer.sampleRate);
const endSample = Math.floor(endTime * buffer.sampleRate);
const beforeLength = startSample;
const afterLength = buffer.length - endSample;
const newLength = beforeLength + afterLength;
const newBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
newLength,
buffer.sampleRate
);
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const sourceData = buffer.getChannelData(channel);
const targetData = newBuffer.getChannelData(channel);
// Copy before segment
for (let i = 0; i < beforeLength; i++) {
targetData[i] = sourceData[i];
}
// Copy after segment
for (let i = 0; i < afterLength; i++) {
targetData[beforeLength + i] = sourceData[endSample + i];
}
}
return newBuffer;
}
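The index arithmetic in these two functions is worth pinning down, since off-by-one errors here corrupt audio silently. A sketch of just the length math, with no AudioContext needed (`segmentLengths` is a hypothetical helper mirroring the sample math above):

```typescript
// Mirrors the sample indexing in extractBufferSegment / deleteBufferSegment.
function segmentLengths(
  totalSamples: number,
  sampleRate: number,
  startTime: number,
  endTime: number
): { extracted: number; remaining: number } {
  const startSample = Math.floor(startTime * sampleRate);
  const endSample = Math.floor(endTime * sampleRate);
  return {
    extracted: endSample - startSample,            // length of the cut segment
    remaining: startSample + (totalSamples - endSample), // length after deletion
  };
}

// Cutting 1s out of a 4s buffer at 48 kHz:
const r = segmentLengths(4 * 48000, 48000, 1.5, 2.5);
console.log(r.extracted); // → 48000
console.log(r.remaining); // → 144000
// Invariant: extract + delete partition the original buffer exactly.
console.log(r.extracted + r.remaining === 4 * 48000); // → true
```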
/**
* Insert an AudioBuffer at a specific position
*/
export function insertBufferSegment(
buffer: AudioBuffer,
insertBuffer: AudioBuffer,
insertTime: number
): AudioBuffer {
const audioContext = getAudioContext();
const insertSample = Math.floor(insertTime * buffer.sampleRate);
const newLength = buffer.length + insertBuffer.length;
const newBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
newLength,
buffer.sampleRate
);
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const sourceData = buffer.getChannelData(channel);
const insertData = insertBuffer.getChannelData(
Math.min(channel, insertBuffer.numberOfChannels - 1)
);
const targetData = newBuffer.getChannelData(channel);
// Copy before insert point
for (let i = 0; i < insertSample; i++) {
targetData[i] = sourceData[i];
}
// Copy insert buffer
for (let i = 0; i < insertBuffer.length; i++) {
targetData[insertSample + i] = insertData[i];
}
// Copy after insert point
for (let i = insertSample; i < buffer.length; i++) {
targetData[i + insertBuffer.length] = sourceData[i];
}
}
return newBuffer;
}
/**
* Trim buffer to selection
*/
export function trimBuffer(
buffer: AudioBuffer,
startTime: number,
endTime: number
): AudioBuffer {
return extractBufferSegment(buffer, startTime, endTime);
}
/**
* Concatenate two audio buffers
*/
export function concatenateBuffers(
buffer1: AudioBuffer,
buffer2: AudioBuffer
): AudioBuffer {
const audioContext = getAudioContext();
const newLength = buffer1.length + buffer2.length;
const channels = Math.max(buffer1.numberOfChannels, buffer2.numberOfChannels);
const newBuffer = audioContext.createBuffer(
channels,
newLength,
buffer1.sampleRate
);
for (let channel = 0; channel < channels; channel++) {
const targetData = newBuffer.getChannelData(channel);
// Copy first buffer
if (channel < buffer1.numberOfChannels) {
const data1 = buffer1.getChannelData(channel);
targetData.set(data1, 0);
}
// Copy second buffer
if (channel < buffer2.numberOfChannels) {
const data2 = buffer2.getChannelData(channel);
targetData.set(data2, buffer1.length);
}
}
return newBuffer;
}
/**
* Duplicate a segment of audio buffer (extract and insert it after the selection)
*/
export function duplicateBufferSegment(
buffer: AudioBuffer,
startTime: number,
endTime: number
): AudioBuffer {
const segment = extractBufferSegment(buffer, startTime, endTime);
return insertBufferSegment(buffer, segment, endTime);
}


@@ -3,22 +3,213 @@
*/
import { getAudioContext } from './context';
import { checkFileMemoryLimit, type MemoryCheckResult } from '../utils/memory-limits';
export interface ImportOptions {
convertToMono?: boolean;
targetSampleRate?: number; // If specified, resample to this rate
normalizeOnImport?: boolean;
}
export interface AudioFileInfo {
buffer: AudioBuffer;
metadata: AudioMetadata;
}
export interface AudioMetadata {
fileName: string;
fileSize: number;
fileType: string;
duration: number;
sampleRate: number;
channels: number;
bitDepth?: number;
codec?: string;
}
/**
* Decode an audio file to AudioBuffer with optional conversions
*/
export async function decodeAudioFile(
file: File,
options: ImportOptions = {}
): Promise<AudioBuffer> {
const arrayBuffer = await file.arrayBuffer();
const audioContext = getAudioContext();
try {
let audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
// Apply conversions if requested
if (options.convertToMono && audioBuffer.numberOfChannels > 1) {
audioBuffer = convertToMono(audioBuffer);
}
if (options.targetSampleRate && audioBuffer.sampleRate !== options.targetSampleRate) {
audioBuffer = await resampleAudioBuffer(audioBuffer, options.targetSampleRate);
}
if (options.normalizeOnImport) {
audioBuffer = normalizeAudioBuffer(audioBuffer);
}
return audioBuffer;
} catch (error) {
throw new Error(`Failed to decode audio file: ${error}`);
}
}
/**
* Decode audio file and return both buffer and metadata
*/
export async function importAudioFile(
file: File,
options: ImportOptions = {}
): Promise<AudioFileInfo> {
const audioBuffer = await decodeAudioFile(file, options);
const metadata = extractMetadata(file, audioBuffer);
return {
buffer: audioBuffer,
metadata,
};
}
/**
* Convert stereo (or multi-channel) audio to mono
*/
function convertToMono(audioBuffer: AudioBuffer): AudioBuffer {
const audioContext = getAudioContext();
const numberOfChannels = audioBuffer.numberOfChannels;
if (numberOfChannels === 1) {
return audioBuffer; // Already mono
}
// Create a new mono buffer
const monoBuffer = audioContext.createBuffer(
1,
audioBuffer.length,
audioBuffer.sampleRate
);
const monoData = monoBuffer.getChannelData(0);
// Mix all channels equally
for (let i = 0; i < audioBuffer.length; i++) {
let sum = 0;
for (let channel = 0; channel < numberOfChannels; channel++) {
sum += audioBuffer.getChannelData(channel)[i];
}
monoData[i] = sum / numberOfChannels;
}
return monoBuffer;
}
/**
* Resample audio buffer to a different sample rate
*/
async function resampleAudioBuffer(
audioBuffer: AudioBuffer,
targetSampleRate: number
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
// Create an offline context at the target sample rate
const offlineContext = new OfflineAudioContext(
audioBuffer.numberOfChannels,
Math.ceil(audioBuffer.duration * targetSampleRate),
targetSampleRate
);
// Create a buffer source
const source = offlineContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(offlineContext.destination);
source.start(0);
// Render the audio at the new sample rate
const resampledBuffer = await offlineContext.startRendering();
return resampledBuffer;
}
/**
* Normalize audio buffer to peak amplitude
*/
function normalizeAudioBuffer(audioBuffer: AudioBuffer): AudioBuffer {
const audioContext = getAudioContext();
// Find peak amplitude across all channels
let peak = 0;
for (let channel = 0; channel < audioBuffer.numberOfChannels; channel++) {
const channelData = audioBuffer.getChannelData(channel);
for (let i = 0; i < channelData.length; i++) {
const abs = Math.abs(channelData[i]);
if (abs > peak) peak = abs;
}
}
if (peak === 0 || peak === 1.0) {
return audioBuffer; // Already normalized or silent
}
// Create normalized buffer
const normalizedBuffer = audioContext.createBuffer(
audioBuffer.numberOfChannels,
audioBuffer.length,
audioBuffer.sampleRate
);
// Apply normalization with 1% headroom
const scale = 0.99 / peak;
for (let channel = 0; channel < audioBuffer.numberOfChannels; channel++) {
const inputData = audioBuffer.getChannelData(channel);
const outputData = normalizedBuffer.getChannelData(channel);
for (let i = 0; i < inputData.length; i++) {
outputData[i] = inputData[i] * scale;
}
}
return normalizedBuffer;
}
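The normalization step reduces to a single scale factor applied per sample; the math can be checked on a plain Float32Array without an AudioBuffer (`normalizeInPlace` is a hypothetical stand-in for the buffer-based function above):

```typescript
// Same math as normalizeAudioBuffer: scale the signal so its peak
// lands at 0.99 (1% headroom below full scale).
function normalizeInPlace(samples: Float32Array): Float32Array {
  let peak = 0;
  for (let i = 0; i < samples.length; i++) {
    peak = Math.max(peak, Math.abs(samples[i]));
  }
  if (peak === 0 || peak === 1.0) return samples; // silent or already at full scale
  const scale = 0.99 / peak;
  for (let i = 0; i < samples.length; i++) samples[i] *= scale;
  return samples;
}

const quiet = new Float32Array([0.1, -0.25, 0.2]);
normalizeInPlace(quiet);
// The new peak is the old -0.25 sample, scaled up to -0.99:
console.log(Math.max(...Array.from(quiet).map(Math.abs)).toFixed(2)); // → "0.99"
```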
/**
* Extract metadata from file and audio buffer
*/
function extractMetadata(file: File, audioBuffer: AudioBuffer): AudioMetadata {
// Detect codec from file extension or MIME type
const codec = detectCodec(file);
return {
fileName: file.name,
fileSize: file.size,
fileType: file.type || 'unknown',
duration: audioBuffer.duration,
sampleRate: audioBuffer.sampleRate,
channels: audioBuffer.numberOfChannels,
codec,
};
}
/**
* Detect audio codec from file
*/
function detectCodec(file: File): string {
const ext = file.name.split('.').pop()?.toLowerCase();
const mimeType = file.type.toLowerCase();
if (mimeType.includes('wav') || ext === 'wav') return 'WAV (PCM)';
if (mimeType.includes('mpeg') || mimeType.includes('mp3') || ext === 'mp3') return 'MP3';
if (mimeType.includes('ogg') || ext === 'ogg') return 'OGG Vorbis';
if (mimeType.includes('flac') || ext === 'flac') return 'FLAC';
if (mimeType.includes('m4a') || mimeType.includes('aac') || ext === 'm4a') return 'AAC (M4A)';
if (ext === 'aiff' || ext === 'aif') return 'AIFF';
if (mimeType.includes('webm') || ext === 'webm') return 'WebM Opus';
return 'Unknown';
}
/**
* Get audio file metadata without decoding the entire file
*/
@@ -50,10 +241,21 @@ export function isSupportedAudioFormat(file: File): boolean {
'audio/aac',
'audio/m4a',
'audio/x-m4a',
'audio/aiff',
'audio/x-aiff',
];
return supportedFormats.includes(file.type) ||
/\.(wav|mp3|ogg|webm|flac|aac|m4a|aiff|aif)$/i.test(file.name);
}
/**
* Check memory requirements for an audio file before decoding
* @param file File to check
* @returns Memory check result with warning if file is large
*/
export function checkAudioFileMemory(file: File): MemoryCheckResult {
return checkFileMemoryLimit(file.size);
}
/**


@@ -0,0 +1,281 @@
/**
* Advanced effects (Pitch Shifter, Time Stretcher, Distortion, Bitcrusher)
*/
import { getAudioContext } from '../context';
export interface PitchShifterParameters {
semitones: number; // -12 to +12 - pitch shift in semitones
cents: number; // -100 to +100 - fine tuning in cents
mix: number; // 0-1 - dry/wet mix
}
export interface TimeStretchParameters {
rate: number; // 0.5-2.0 - playback rate (0.5 = half speed, 2 = double speed)
preservePitch: boolean; // whether to preserve pitch
mix: number; // 0-1 - dry/wet mix
}
export interface DistortionParameters {
drive: number; // 0-1 - amount of distortion
tone: number; // 0-1 - pre-distortion tone control
output: number; // 0-1 - output level
type: 'soft' | 'hard' | 'tube'; // distortion type
mix: number; // 0-1 - dry/wet mix
}
export interface BitcrusherParameters {
bitDepth: number; // 1-16 - bit depth
sampleRate: number; // 100-48000 - sample rate reduction
mix: number; // 0-1 - dry/wet mix
}
/**
* Apply pitch shifting to audio buffer
* Uses simple time-domain pitch shifting (overlap-add)
*/
export async function applyPitchShift(
buffer: AudioBuffer,
params: PitchShifterParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const sampleRate = buffer.sampleRate;
// Calculate pitch shift ratio
const totalCents = params.semitones * 100 + params.cents;
const pitchRatio = Math.pow(2, totalCents / 1200);
// For pitch shifting, we change the playback rate then resample
const newLength = Math.floor(buffer.length / pitchRatio);
const outputBuffer = audioContext.createBuffer(channels, newLength, sampleRate);
// Simple linear interpolation resampling
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < newLength; i++) {
const srcIndex = i * pitchRatio;
const srcIndexInt = Math.floor(srcIndex);
const srcIndexFrac = srcIndex - srcIndexInt;
if (srcIndexInt < buffer.length - 1) {
const sample1 = inputData[srcIndexInt];
const sample2 = inputData[srcIndexInt + 1];
const interpolated = sample1 + (sample2 - sample1) * srcIndexFrac;
// Mix dry/wet
const dry = i < buffer.length ? inputData[i] : 0;
outputData[i] = dry * (1 - params.mix) + interpolated * params.mix;
} else if (srcIndexInt < buffer.length) {
const dry = i < buffer.length ? inputData[i] : 0;
outputData[i] = dry * (1 - params.mix) + inputData[srcIndexInt] * params.mix;
}
}
}
return outputBuffer;
}
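The cents-to-ratio conversion at the top of this function follows the equal-temperament rule: 1200 cents make an octave, and an octave doubles the frequency. The conversion alone is easy to verify:

```typescript
// Equal-temperament pitch ratio, as computed inside applyPitchShift above.
function pitchRatio(semitones: number, cents: number): number {
  return Math.pow(2, (semitones * 100 + cents) / 1200);
}

console.log(pitchRatio(12, 0));  // one octave up → 2
console.log(pitchRatio(-12, 0)); // one octave down → 0.5
console.log(pitchRatio(7, 0).toFixed(4)); // perfect fifth ≈ "1.4983"
```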
/**
* Apply time stretching to audio buffer
* Changes duration without affecting pitch (basic implementation)
*/
export async function applyTimeStretch(
buffer: AudioBuffer,
params: TimeStretchParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const sampleRate = buffer.sampleRate;
if (params.preservePitch) {
// Time stretch with pitch preservation (overlap-add)
const newLength = Math.floor(buffer.length / params.rate);
const outputBuffer = audioContext.createBuffer(channels, newLength, sampleRate);
const windowSize = 2048;
const hopSize = Math.floor(windowSize / 4);
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
let readPos = 0;
let writePos = 0;
while (writePos < newLength) {
// Simple overlap-add
for (let i = 0; i < windowSize && writePos + i < newLength; i++) {
const readIndex = Math.floor(readPos + i);
if (readIndex < buffer.length) {
// Hanning window
const window = 0.5 * (1 - Math.cos((2 * Math.PI * i) / windowSize));
outputData[writePos + i] += inputData[readIndex] * window;
}
}
readPos += hopSize * params.rate;
writePos += hopSize;
}
// Normalize
let maxVal = 0;
for (let i = 0; i < newLength; i++) {
maxVal = Math.max(maxVal, Math.abs(outputData[i]));
}
if (maxVal > 0) {
for (let i = 0; i < newLength; i++) {
outputData[i] /= maxVal;
}
}
}
return outputBuffer;
} else {
// Simple speed change (changes pitch)
const newLength = Math.floor(buffer.length / params.rate);
const outputBuffer = audioContext.createBuffer(channels, newLength, sampleRate);
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < newLength; i++) {
const srcIndex = i * params.rate;
const srcIndexInt = Math.floor(srcIndex);
const srcIndexFrac = srcIndex - srcIndexInt;
if (srcIndexInt < buffer.length - 1) {
const sample1 = inputData[srcIndexInt];
const sample2 = inputData[srcIndexInt + 1];
outputData[i] = sample1 + (sample2 - sample1) * srcIndexFrac;
} else if (srcIndexInt < buffer.length) {
outputData[i] = inputData[srcIndexInt];
}
}
}
return outputBuffer;
}
}
/**
* Apply distortion/overdrive effect
*/
export async function applyDistortion(
buffer: AudioBuffer,
params: DistortionParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
const outputBuffer = audioContext.createBuffer(channels, length, sampleRate);
// Distortion function based on type
const distort = (sample: number, drive: number, type: string): number => {
const x = sample * (1 + drive * 10);
switch (type) {
case 'soft':
// Soft clipping (tanh)
return Math.tanh(x);
case 'hard':
// Hard clipping
return Math.max(-1, Math.min(1, x));
case 'tube':
// Tube-like distortion (asymmetric)
if (x > 0) {
return 1 - Math.exp(-x);
} else {
return -1 + Math.exp(x);
}
default:
return x;
}
};
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
// Simple low-pass filter for tone control
let filterState = 0;
const filterCutoff = params.tone;
for (let i = 0; i < length; i++) {
let sample = inputData[i];
// Pre-distortion tone filter
filterState = filterState * (1 - filterCutoff) + sample * filterCutoff;
sample = filterState;
// Apply distortion
const distorted = distort(sample, params.drive, params.type);
// Output level
const processed = distorted * params.output;
// Mix dry/wet
outputData[i] = inputData[i] * (1 - params.mix) + processed * params.mix;
}
}
return outputBuffer;
}
/**
* Apply bitcrusher effect
*/
export async function applyBitcrusher(
buffer: AudioBuffer,
params: BitcrusherParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
const outputBuffer = audioContext.createBuffer(channels, length, sampleRate);
// Calculate bit depth quantization step
const bitLevels = Math.pow(2, params.bitDepth);
const step = 2 / bitLevels;
// Calculate sample rate reduction ratio
const srRatio = sampleRate / params.sampleRate;
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
let holdSample = 0;
let holdCounter = 0;
for (let i = 0; i < length; i++) {
// Sample rate reduction (sample and hold)
if (holdCounter <= 0) {
let sample = inputData[i];
// Bit depth reduction
sample = Math.floor(sample / step) * step;
holdSample = sample;
holdCounter = srRatio;
}
holdCounter--;
// Mix dry/wet
outputData[i] = inputData[i] * (1 - params.mix) + holdSample * params.mix;
}
}
return outputBuffer;
}
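Bit-depth reduction here is a straight quantization onto a grid of 2^bitDepth levels spanning the -1..1 range. The core step can be seen on a few samples (`crush` is an illustrative extraction of the per-sample math, without the sample-and-hold part):

```typescript
// Same quantizer as applyBitcrusher: snap each sample down to the
// nearest step of a 2^bitDepth-level grid spanning [-1, 1].
function crush(sample: number, bitDepth: number): number {
  const step = 2 / Math.pow(2, bitDepth);
  return Math.floor(sample / step) * step;
}

console.log(crush(0.3, 2)); // 4 levels, step 0.5 → 0
console.log(crush(0.8, 2)); // → 0.5
console.log(crush(0.3, 8)); // 256 levels, much finer → 0.296875
```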

lib/audio/effects/chain.ts Normal file

@@ -0,0 +1,305 @@
/**
* Effect Chain System
* Manages chains of audio effects with bypass, reordering, and preset support
*/
import type {
PitchShifterParameters,
TimeStretchParameters,
DistortionParameters,
BitcrusherParameters,
} from './advanced';
import type {
CompressorParameters,
LimiterParameters,
GateParameters,
} from './dynamics';
import type {
DelayParameters,
ReverbParameters,
ChorusParameters,
FlangerParameters,
PhaserParameters,
} from './time-based';
import type { FilterOptions } from './filters';
// Effect type identifier
export type EffectType =
// Filters
| 'lowpass'
| 'highpass'
| 'bandpass'
| 'notch'
| 'lowshelf'
| 'highshelf'
| 'peaking'
// Dynamics
| 'compressor'
| 'limiter'
| 'gate'
// Time-based
| 'delay'
| 'reverb'
| 'chorus'
| 'flanger'
| 'phaser'
// Advanced
| 'pitch'
| 'timestretch'
| 'distortion'
| 'bitcrusher';
// Union of all effect parameter types
export type EffectParameters =
| FilterOptions
| CompressorParameters
| LimiterParameters
| GateParameters
| DelayParameters
| ReverbParameters
| ChorusParameters
| FlangerParameters
| PhaserParameters
| PitchShifterParameters
| TimeStretchParameters
| DistortionParameters
| BitcrusherParameters
| Record<string, never>; // For effects without parameters
// Effect instance in a chain
export interface ChainEffect {
id: string;
type: EffectType;
name: string;
enabled: boolean;
expanded?: boolean; // UI state for effect device expansion
parameters?: EffectParameters;
}
// Effect chain
export interface EffectChain {
id: string;
name: string;
effects: ChainEffect[];
}
// Effect preset
export interface EffectPreset {
id: string;
name: string;
description?: string;
chain: EffectChain;
createdAt: number;
}
/**
* Generate a unique ID for effects/chains
*/
export function generateId(): string {
return `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`; // slice: substr is deprecated
}
/**
* Create a new effect instance
*/
export function createEffect(
type: EffectType,
name: string,
parameters?: EffectParameters
): ChainEffect {
return {
id: generateId(),
type,
name,
enabled: true,
parameters: parameters || getDefaultParameters(type),
};
}
/**
* Create a new effect chain
*/
export function createEffectChain(name: string = 'New Chain'): EffectChain {
return {
id: generateId(),
name,
effects: [],
};
}
/**
* Add effect to chain
*/
export function addEffectToChain(
chain: EffectChain,
effect: ChainEffect
): EffectChain {
return {
...chain,
effects: [...chain.effects, effect],
};
}
/**
* Remove effect from chain
*/
export function removeEffectFromChain(
chain: EffectChain,
effectId: string
): EffectChain {
return {
...chain,
effects: chain.effects.filter((e) => e.id !== effectId),
};
}
/**
* Toggle effect enabled state
*/
export function toggleEffect(
chain: EffectChain,
effectId: string
): EffectChain {
return {
...chain,
effects: chain.effects.map((e) =>
e.id === effectId ? { ...e, enabled: !e.enabled } : e
),
};
}
/**
* Update effect parameters
*/
export function updateEffectParameters(
chain: EffectChain,
effectId: string,
parameters: EffectParameters
): EffectChain {
return {
...chain,
effects: chain.effects.map((e) =>
e.id === effectId ? { ...e, parameters } : e
),
};
}
/**
* Reorder effects in chain
*/
export function reorderEffects(
chain: EffectChain,
fromIndex: number,
toIndex: number
): EffectChain {
const effects = [...chain.effects];
const [removed] = effects.splice(fromIndex, 1);
effects.splice(toIndex, 0, removed);
return {
...chain,
effects,
};
}
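Every chain helper above returns a new chain object instead of mutating its input. A trimmed-down standalone sketch of that immutable-update pattern; `MiniEffect`/`MiniChain` are illustrative stand-ins, not the module's real types:

```typescript
// Minimal stand-ins for ChainEffect/EffectChain to demonstrate the pattern
interface MiniEffect { id: string; enabled: boolean; }
interface MiniChain { effects: MiniEffect[]; }

// Toggle returns a new chain; the original objects are left untouched
function toggle(chain: MiniChain, id: string): MiniChain {
  return {
    ...chain,
    effects: chain.effects.map((e) =>
      e.id === id ? { ...e, enabled: !e.enabled } : e
    ),
  };
}

const before: MiniChain = { effects: [{ id: "a", enabled: true }] };
const after = toggle(before, "a");
// before.effects[0].enabled is still true; after.effects[0].enabled is false
```

This is what makes the chain safe to store in React state: a toggled chain is a fresh reference, so change detection works by identity.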
/**
* Create a preset from a chain
*/
export function createPreset(
chain: EffectChain,
name: string,
description?: string
): EffectPreset {
return {
id: generateId(),
name,
description,
chain: JSON.parse(JSON.stringify(chain)), // Deep clone
createdAt: Date.now(),
};
}
/**
* Load preset (returns a new chain)
*/
export function loadPreset(preset: EffectPreset): EffectChain {
return JSON.parse(JSON.stringify(preset.chain)); // Deep clone
}
/**
* Get default parameters for an effect type
*/
export function getDefaultParameters(type: EffectType): EffectParameters {
switch (type) {
// Filters (include the concrete filter type so FilterOptions is complete)
case 'lowpass':
case 'highpass':
case 'bandpass':
case 'notch':
return { type, frequency: 1000, Q: 1 } as FilterOptions;
case 'lowshelf':
case 'highshelf':
case 'peaking':
return { type, frequency: 1000, Q: 1, gain: 0 } as FilterOptions;
// Dynamics (attack/release in ms, matching the dynamics module's units)
case 'compressor':
return { threshold: -24, ratio: 4, attack: 3, release: 250, knee: 30, makeupGain: 0 } as CompressorParameters;
case 'limiter':
return { threshold: -3, attack: 1, release: 50, makeupGain: 0 } as LimiterParameters;
case 'gate':
return { threshold: -40, ratio: 10, attack: 1, release: 100, knee: 0 } as GateParameters;
// Time-based (delay/time values in ms, matching the time-based module's units)
case 'delay':
return { time: 500, feedback: 0.3, mix: 0.5 } as DelayParameters;
case 'reverb':
return { roomSize: 0.5, damping: 0.5, mix: 0.3 } as ReverbParameters;
case 'chorus':
return { rate: 1.5, depth: 0.5, delay: 20, mix: 0.5 } as ChorusParameters;
case 'flanger':
return { rate: 0.5, depth: 0.5, feedback: 0.5, delay: 5, mix: 0.5 } as FlangerParameters;
case 'phaser':
return { rate: 0.5, depth: 0.5, feedback: 0.5, stages: 4, mix: 0.5 } as PhaserParameters;
// Advanced
case 'distortion':
return { drive: 0.5, type: 'soft', output: 0.7, mix: 1 } as DistortionParameters;
case 'pitch':
return { semitones: 0, cents: 0, mix: 1 } as PitchShifterParameters;
case 'timestretch':
return { rate: 1.0, preservePitch: false, mix: 1 } as TimeStretchParameters;
case 'bitcrusher':
return { bitDepth: 8, sampleRate: 8000, mix: 1 } as BitcrusherParameters;
default:
return {};
}
}
/**
* Effect display names, keyed by effect type
*/
export const EFFECT_NAMES: Record<EffectType, string> = {
lowpass: 'Low-Pass Filter',
highpass: 'High-Pass Filter',
bandpass: 'Band-Pass Filter',
notch: 'Notch Filter',
lowshelf: 'Low Shelf',
highshelf: 'High Shelf',
peaking: 'Peaking EQ',
compressor: 'Compressor',
limiter: 'Limiter',
gate: 'Gate/Expander',
delay: 'Delay/Echo',
reverb: 'Reverb',
chorus: 'Chorus',
flanger: 'Flanger',
phaser: 'Phaser',
pitch: 'Pitch Shifter',
timestretch: 'Time Stretch',
distortion: 'Distortion',
bitcrusher: 'Bitcrusher',
};


@@ -0,0 +1,205 @@
/**
* Dynamics processing effects (Compressor, Limiter, Gate/Expander)
*/
import { getAudioContext } from '../context';
export interface CompressorParameters {
threshold: number; // dB - level where compression starts
ratio: number; // Compression ratio (e.g., 4 = 4:1)
attack: number; // ms - how quickly to compress
release: number; // ms - how quickly to stop compressing
knee: number; // dB - width of soft knee (0 = hard knee)
makeupGain: number; // dB - gain to apply after compression
}
export interface LimiterParameters {
threshold: number; // dB - maximum level
attack: number; // ms - how quickly to limit
release: number; // ms - how quickly to stop limiting
makeupGain: number; // dB - gain to apply after limiting
}
export interface GateParameters {
threshold: number; // dB - level below which gate activates
ratio: number; // Expansion ratio (e.g., 2 = 2:1)
attack: number; // ms - how quickly to close gate
release: number; // ms - how quickly to open gate
knee: number; // dB - width of soft knee
}
/**
* Apply compression to audio buffer
*/
export async function applyCompressor(
buffer: AudioBuffer,
params: CompressorParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
// Create output buffer
const outputBuffer = audioContext.createBuffer(channels, length, sampleRate);
// Convert time constants to samples
const attackSamples = Math.max(1, (params.attack / 1000) * sampleRate);
const releaseSamples = Math.max(1, (params.release / 1000) * sampleRate);
// Convert dB to linear
const thresholdLinear = dbToLinear(params.threshold);
const makeupGainLinear = dbToLinear(params.makeupGain);
const kneeLinear = dbToLinear(params.knee);
// Process each channel
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
let envelope = 0;
for (let i = 0; i < length; i++) {
const input = inputData[i];
const inputAbs = Math.abs(input);
// Envelope follower with attack/release
if (inputAbs > envelope) {
envelope = envelope + (inputAbs - envelope) / attackSamples;
} else {
envelope = envelope + (inputAbs - envelope) / releaseSamples;
}
// Calculate gain reduction
let gain = 1.0;
if (envelope > thresholdLinear) {
// Soft knee, approximated in the linear domain rather than in dB
const overThreshold = envelope - thresholdLinear;
const kneeRange = kneeLinear / 2;
if (params.knee > 0 && overThreshold < kneeRange) {
// In the knee region - smooth transition
const kneeRatio = overThreshold / kneeRange;
const compressionAmount = (1 - 1 / params.ratio) * kneeRatio;
gain = 1 - compressionAmount * (overThreshold / envelope);
} else {
// Above knee - full compression
const exceededDb = linearToDb(envelope) - params.threshold;
const gainReductionDb = exceededDb * (1 - 1 / params.ratio);
gain = dbToLinear(-gainReductionDb);
}
}
// Apply gain reduction and makeup gain
outputData[i] = input * gain * makeupGainLinear;
}
}
return outputBuffer;
}
/**
* Apply limiting to audio buffer
*/
export async function applyLimiter(
buffer: AudioBuffer,
params: LimiterParameters
): Promise<AudioBuffer> {
// Limiter is essentially a compressor with infinite ratio
return applyCompressor(buffer, {
threshold: params.threshold,
ratio: 100, // Very high ratio approximates infinity:1
attack: params.attack,
release: params.release,
knee: 0, // Hard knee for brick-wall limiting
makeupGain: params.makeupGain,
});
}
/**
* Apply gate/expander to audio buffer
*/
export async function applyGate(
buffer: AudioBuffer,
params: GateParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
// Create output buffer
const outputBuffer = audioContext.createBuffer(channels, length, sampleRate);
// Convert time constants to samples
const attackSamples = Math.max(1, (params.attack / 1000) * sampleRate);
const releaseSamples = Math.max(1, (params.release / 1000) * sampleRate);
// Convert dB to linear
const thresholdLinear = dbToLinear(params.threshold);
const kneeLinear = dbToLinear(params.knee);
// Process each channel
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
let envelope = 0;
for (let i = 0; i < length; i++) {
const input = inputData[i];
const inputAbs = Math.abs(input);
// Envelope follower with attack/release
if (inputAbs > envelope) {
envelope = envelope + (inputAbs - envelope) / attackSamples;
} else {
envelope = envelope + (inputAbs - envelope) / releaseSamples;
}
// Calculate gain reduction
let gain = 1.0;
if (envelope < thresholdLinear) {
// Below threshold - apply expansion/gating
const belowThreshold = thresholdLinear - envelope;
const kneeRange = kneeLinear / 2;
if (params.knee > 0 && belowThreshold < kneeRange) {
// In the knee region - smooth transition
const kneeRatio = belowThreshold / kneeRange;
const expansionAmount = (1 - params.ratio) * kneeRatio;
gain = 1 + expansionAmount * (belowThreshold / thresholdLinear);
} else {
// Below knee - full expansion
const belowDb = params.threshold - linearToDb(envelope);
const gainReductionDb = belowDb * (params.ratio - 1);
gain = dbToLinear(-gainReductionDb);
}
// Clamp to prevent extreme amplification
gain = Math.max(0, Math.min(1, gain));
}
// Apply gain
outputData[i] = input * gain;
}
}
return outputBuffer;
}
/**
* Convert decibels to linear gain
*/
function dbToLinear(db: number): number {
return Math.pow(10, db / 20);
}
/**
* Convert linear gain to decibels
*/
function linearToDb(linear: number): number {
return 20 * Math.log10(Math.max(linear, 0.00001)); // Prevent log(0)
}
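Above the knee, the compressor's gain reduction comes down to simple dB arithmetic: a signal `x` dB over the threshold is attenuated by `x * (1 - 1/ratio)` dB. A standalone sketch of that static gain math (`gainReductionDb` is a hypothetical helper, not the envelope follower itself):

```typescript
// dB to linear gain, as in the dynamics module
function dbToLinear(db: number): number {
  return Math.pow(10, db / 20);
}

// Static above-knee gain reduction: exceededDb * (1 - 1/ratio)
function gainReductionDb(levelDb: number, thresholdDb: number, ratio: number): number {
  const exceeded = Math.max(0, levelDb - thresholdDb);
  return exceeded * (1 - 1 / ratio);
}

// A 0 dB peak through a -24 dB threshold at 4:1 is pulled down by
// 24 * (1 - 1/4) = 18 dB, i.e. a linear gain of about 0.126.
const reduction = gainReductionDb(0, -24, 4);
const gain = dbToLinear(-reduction);
```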

lib/audio/effects/fade.ts

@@ -0,0 +1,116 @@
/**
* Fade in/out effects
*/
import { getAudioContext } from '../context';
export type FadeType = 'linear' | 'exponential' | 'logarithmic';
/**
* Apply fade in to audio buffer
* @param buffer - Source audio buffer
* @param duration - Fade duration in seconds
* @param type - Fade curve type
* @returns New audio buffer with fade in applied
*/
export function applyFadeIn(
buffer: AudioBuffer,
duration: number,
type: FadeType = 'linear'
): AudioBuffer {
const audioContext = getAudioContext();
const fadeSamples = Math.min(
Math.floor(duration * buffer.sampleRate),
buffer.length
);
const outputBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
buffer.length,
buffer.sampleRate
);
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
if (i < fadeSamples) {
const progress = i / fadeSamples;
const gain = calculateFadeGain(progress, type);
outputData[i] = inputData[i] * gain;
} else {
outputData[i] = inputData[i];
}
}
}
return outputBuffer;
}
/**
* Apply fade out to audio buffer
* @param buffer - Source audio buffer
* @param duration - Fade duration in seconds
* @param type - Fade curve type
* @returns New audio buffer with fade out applied
*/
export function applyFadeOut(
buffer: AudioBuffer,
duration: number,
type: FadeType = 'linear'
): AudioBuffer {
const audioContext = getAudioContext();
const fadeSamples = Math.min(
Math.floor(duration * buffer.sampleRate),
buffer.length
);
const fadeStartSample = buffer.length - fadeSamples;
const outputBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
buffer.length,
buffer.sampleRate
);
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
if (i >= fadeStartSample) {
const progress = (i - fadeStartSample) / fadeSamples;
const gain = calculateFadeGain(1 - progress, type);
outputData[i] = inputData[i] * gain;
} else {
outputData[i] = inputData[i];
}
}
}
return outputBuffer;
}
/**
* Calculate fade gain based on progress and curve type
* @param progress - Progress from 0 to 1
* @param type - Fade curve type
* @returns Gain value from 0 to 1
*/
function calculateFadeGain(progress: number, type: FadeType): number {
switch (type) {
case 'linear':
return progress;
case 'exponential':
// x² curve: gain rises slowly at first, then accelerates toward the end
return progress * progress;
case 'logarithmic':
// √x curve: gain rises quickly at first, then levels off toward the end
return Math.sqrt(progress);
default:
return progress;
}
}
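Evaluating the three curves at a single point shows how differently they shape a fade; `fadeGain` here is a condensed standalone copy of the curve math for comparison:

```typescript
type Curve = "linear" | "exponential" | "logarithmic";

function fadeGain(progress: number, type: Curve): number {
  switch (type) {
    case "exponential":
      return progress * progress; // slow start, fast finish
    case "logarithmic":
      return Math.sqrt(progress); // fast start, slow finish
    default:
      return progress; // linear
  }
}

// Halfway through a fade-in, the curves give noticeably different gains:
const linear = fadeGain(0.5, "linear");           // 0.5
const exponential = fadeGain(0.5, "exponential"); // 0.25
const logarithmic = fadeGain(0.5, "logarithmic"); // ≈ 0.707
```

Perceptually, the √x curve tends to sound smoother on fade-ins because loudness perception is roughly logarithmic.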


@@ -0,0 +1,168 @@
/**
* Audio filter effects using BiquadFilterNode
*/
import { getAudioContext } from '../context';
export type FilterType = 'lowpass' | 'highpass' | 'bandpass' | 'lowshelf' | 'highshelf' | 'peaking' | 'notch' | 'allpass';
export interface FilterOptions {
type: FilterType;
frequency: number;
Q?: number;
gain?: number;
}
/**
* Apply a filter to an audio buffer using offline audio processing
* @param buffer - Source audio buffer
* @param options - Filter options
* @returns New audio buffer with filter applied
*/
export async function applyFilter(
buffer: AudioBuffer,
options: FilterOptions
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
// Create offline context for processing
const offlineContext = new OfflineAudioContext(
buffer.numberOfChannels,
buffer.length,
buffer.sampleRate
);
// Create source from buffer
const source = offlineContext.createBufferSource();
source.buffer = buffer;
// Create and configure filter
const filter = offlineContext.createBiquadFilter();
filter.type = options.type;
filter.frequency.setValueAtTime(options.frequency, offlineContext.currentTime);
if (options.Q !== undefined) {
filter.Q.setValueAtTime(options.Q, offlineContext.currentTime);
}
if (options.gain !== undefined) {
filter.gain.setValueAtTime(options.gain, offlineContext.currentTime);
}
// Connect nodes
source.connect(filter);
filter.connect(offlineContext.destination);
// Start playback and render
source.start(0);
const renderedBuffer = await offlineContext.startRendering();
return renderedBuffer;
}
/**
* Apply low-pass filter (cuts high frequencies)
* @param buffer - Source audio buffer
* @param frequency - Cutoff frequency in Hz (default: 1000)
* @param Q - Quality factor (default: 1.0)
* @returns New audio buffer with filter applied
*/
export async function applyLowPassFilter(
buffer: AudioBuffer,
frequency: number = 1000,
Q: number = 1.0
): Promise<AudioBuffer> {
return applyFilter(buffer, { type: 'lowpass', frequency, Q });
}
/**
* Apply high-pass filter (cuts low frequencies)
* @param buffer - Source audio buffer
* @param frequency - Cutoff frequency in Hz (default: 100)
* @param Q - Quality factor (default: 1.0)
* @returns New audio buffer with filter applied
*/
export async function applyHighPassFilter(
buffer: AudioBuffer,
frequency: number = 100,
Q: number = 1.0
): Promise<AudioBuffer> {
return applyFilter(buffer, { type: 'highpass', frequency, Q });
}
/**
* Apply band-pass filter (isolates a frequency range)
* @param buffer - Source audio buffer
* @param frequency - Center frequency in Hz (default: 1000)
* @param Q - Quality factor/bandwidth (default: 1.0)
* @returns New audio buffer with filter applied
*/
export async function applyBandPassFilter(
buffer: AudioBuffer,
frequency: number = 1000,
Q: number = 1.0
): Promise<AudioBuffer> {
return applyFilter(buffer, { type: 'bandpass', frequency, Q });
}
/**
* Apply notch filter (removes a specific frequency)
* @param buffer - Source audio buffer
* @param frequency - Notch frequency in Hz (default: 1000)
* @param Q - Quality factor/bandwidth (default: 1.0)
* @returns New audio buffer with filter applied
*/
export async function applyNotchFilter(
buffer: AudioBuffer,
frequency: number = 1000,
Q: number = 1.0
): Promise<AudioBuffer> {
return applyFilter(buffer, { type: 'notch', frequency, Q });
}
/**
* Apply low shelf filter (boost/cut low frequencies)
* @param buffer - Source audio buffer
* @param frequency - Shelf frequency in Hz (default: 200)
* @param gain - Gain in dB (default: 6)
* @returns New audio buffer with filter applied
*/
export async function applyLowShelfFilter(
buffer: AudioBuffer,
frequency: number = 200,
gain: number = 6
): Promise<AudioBuffer> {
return applyFilter(buffer, { type: 'lowshelf', frequency, gain });
}
/**
* Apply high shelf filter (boost/cut high frequencies)
* @param buffer - Source audio buffer
* @param frequency - Shelf frequency in Hz (default: 3000)
* @param gain - Gain in dB (default: 6)
* @returns New audio buffer with filter applied
*/
export async function applyHighShelfFilter(
buffer: AudioBuffer,
frequency: number = 3000,
gain: number = 6
): Promise<AudioBuffer> {
return applyFilter(buffer, { type: 'highshelf', frequency, gain });
}
/**
* Apply peaking EQ filter (boost/cut a specific frequency band)
* @param buffer - Source audio buffer
* @param frequency - Center frequency in Hz (default: 1000)
* @param Q - Quality factor/bandwidth (default: 1.0)
* @param gain - Gain in dB (default: 6)
* @returns New audio buffer with filter applied
*/
export async function applyPeakingFilter(
buffer: AudioBuffer,
frequency: number = 1000,
Q: number = 1.0,
gain: number = 6
): Promise<AudioBuffer> {
return applyFilter(buffer, { type: 'peaking', frequency, Q, gain });
}

lib/audio/effects/gain.ts

@@ -0,0 +1,52 @@
/**
* Gain/Volume adjustment effect
*/
import { getAudioContext } from '../context';
/**
* Apply gain to an audio buffer
* @param buffer - Source audio buffer
* @param gainValue - Gain multiplier (1.0 = no change, 0.5 = -6dB, 2.0 = +6dB)
* @returns New audio buffer with gain applied
*/
export function applyGain(buffer: AudioBuffer, gainValue: number): AudioBuffer {
const audioContext = getAudioContext();
const outputBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
buffer.length,
buffer.sampleRate
);
// Apply gain to each channel
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
outputData[i] = inputData[i] * gainValue;
// Clamp to prevent distortion
outputData[i] = Math.max(-1, Math.min(1, outputData[i]));
}
}
return outputBuffer;
}
/**
* Convert dB to gain multiplier
* @param db - Decibels
* @returns Gain multiplier
*/
export function dbToGain(db: number): number {
return Math.pow(10, db / 20);
}
/**
* Convert gain multiplier to dB
* @param gain - Gain multiplier
* @returns Decibels
*/
export function gainToDb(gain: number): number {
return 20 * Math.log10(Math.max(gain, 0.00001)); // Guard against log(0) → -Infinity
}


@@ -0,0 +1,132 @@
/**
* Normalization effects
*/
import { getAudioContext } from '../context';
/**
* Normalize audio to peak amplitude
* @param buffer - Source audio buffer
* @param targetPeak - Target peak amplitude (0.0 to 1.0, default 1.0)
* @returns New audio buffer with normalized audio
*/
export function normalizePeak(buffer: AudioBuffer, targetPeak: number = 1.0): AudioBuffer {
const audioContext = getAudioContext();
// Find the absolute peak across all channels
let maxPeak = 0;
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const channelData = buffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
const abs = Math.abs(channelData[i]);
if (abs > maxPeak) {
maxPeak = abs;
}
}
}
// Calculate gain factor
const gainFactor = maxPeak > 0 ? targetPeak / maxPeak : 1.0;
// Create output buffer and apply gain
const outputBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
buffer.length,
buffer.sampleRate
);
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
outputData[i] = inputData[i] * gainFactor;
}
}
return outputBuffer;
}
/**
* Normalize audio to RMS (loudness)
* @param buffer - Source audio buffer
* @param targetRMS - Target RMS level (0.0 to 1.0, default 0.5)
* @returns New audio buffer with normalized audio
*/
export function normalizeRMS(buffer: AudioBuffer, targetRMS: number = 0.5): AudioBuffer {
const audioContext = getAudioContext();
// Calculate RMS across all channels
let sumSquares = 0;
let totalSamples = 0;
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const channelData = buffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
sumSquares += channelData[i] * channelData[i];
totalSamples++;
}
}
const currentRMS = Math.sqrt(sumSquares / totalSamples);
const gainFactor = currentRMS > 0 ? targetRMS / currentRMS : 1.0;
// Create output buffer and apply gain
const outputBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
buffer.length,
buffer.sampleRate
);
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
outputData[i] = inputData[i] * gainFactor;
// Clamp to prevent distortion
outputData[i] = Math.max(-1, Math.min(1, outputData[i]));
}
}
return outputBuffer;
}
/**
* Get peak amplitude of audio buffer
* @param buffer - Audio buffer
* @returns Peak amplitude (0.0 to 1.0)
*/
export function getPeakAmplitude(buffer: AudioBuffer): number {
let maxPeak = 0;
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const channelData = buffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
const abs = Math.abs(channelData[i]);
if (abs > maxPeak) {
maxPeak = abs;
}
}
}
return maxPeak;
}
/**
* Get RMS amplitude of audio buffer
* @param buffer - Audio buffer
* @returns RMS amplitude
*/
export function getRMSAmplitude(buffer: AudioBuffer): number {
let sumSquares = 0;
let totalSamples = 0;
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const channelData = buffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
sumSquares += channelData[i] * channelData[i];
totalSamples++;
}
}
return Math.sqrt(sumSquares / totalSamples);
}

File diff suppressed because it is too large.


@@ -0,0 +1,31 @@
/**
* Reverse audio effect
*/
import { getAudioContext } from '../context';
/**
* Reverse audio buffer
* @param buffer - Source audio buffer
* @returns New audio buffer with reversed audio
*/
export function reverseAudio(buffer: AudioBuffer): AudioBuffer {
const audioContext = getAudioContext();
const outputBuffer = audioContext.createBuffer(
buffer.numberOfChannels,
buffer.length,
buffer.sampleRate
);
// Reverse each channel
for (let channel = 0; channel < buffer.numberOfChannels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
for (let i = 0; i < buffer.length; i++) {
outputData[i] = inputData[buffer.length - 1 - i];
}
}
return outputBuffer;
}
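The same index arithmetic works on any plain sample array; a standalone sketch (`reverse` is a hypothetical mono-array analogue of `reverseAudio`):

```typescript
// Mirror a sample array: output[i] takes input[length - 1 - i]
function reverse(data: Float32Array): Float32Array {
  const out = new Float32Array(data.length);
  for (let i = 0; i < data.length; i++) {
    out[i] = data[data.length - 1 - i];
  }
  return out;
}
```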


@@ -0,0 +1,128 @@
/**
* Utilities for applying effects to audio selections
*/
import type { Selection } from '@/types/selection';
import { getAudioContext } from '../context';
/**
* Extract a region from an audio buffer
*/
export function extractRegion(
buffer: AudioBuffer,
startTime: number,
endTime: number
): AudioBuffer {
const audioContext = getAudioContext();
const sampleRate = buffer.sampleRate;
const numberOfChannels = buffer.numberOfChannels;
const startSample = Math.max(0, Math.floor(startTime * sampleRate));
const endSample = Math.min(buffer.length, Math.floor(endTime * sampleRate));
const length = Math.max(1, endSample - startSample); // createBuffer requires length > 0
const regionBuffer = audioContext.createBuffer(
numberOfChannels,
length,
sampleRate
);
for (let channel = 0; channel < numberOfChannels; channel++) {
const sourceData = buffer.getChannelData(channel);
const targetData = regionBuffer.getChannelData(channel);
for (let i = 0; i < length; i++) {
targetData[i] = sourceData[startSample + i];
}
}
return regionBuffer;
}
/**
* Replace a region in an audio buffer with processed audio
*/
export function replaceRegion(
originalBuffer: AudioBuffer,
processedRegion: AudioBuffer,
startTime: number
): AudioBuffer {
const audioContext = getAudioContext();
const sampleRate = originalBuffer.sampleRate;
const numberOfChannels = originalBuffer.numberOfChannels;
// Create new buffer with same length as original
const newBuffer = audioContext.createBuffer(
numberOfChannels,
originalBuffer.length,
sampleRate
);
const startSample = Math.floor(startTime * sampleRate);
for (let channel = 0; channel < numberOfChannels; channel++) {
const originalData = originalBuffer.getChannelData(channel);
const processedData = processedRegion.getChannelData(channel);
const newData = newBuffer.getChannelData(channel);
// Copy everything from original
for (let i = 0; i < originalBuffer.length; i++) {
newData[i] = originalData[i];
}
// Replace the selected region with processed data
for (let i = 0; i < processedRegion.length; i++) {
if (startSample + i < newBuffer.length) {
newData[startSample + i] = processedData[i];
}
}
}
return newBuffer;
}
/**
* Apply an effect function to a selection, or entire buffer if no selection
*/
export function applyEffectToSelection(
buffer: AudioBuffer,
selection: Selection | null,
effectFn: (buffer: AudioBuffer) => AudioBuffer
): AudioBuffer {
if (!selection || selection.start === selection.end) {
// No selection, apply to entire buffer
return effectFn(buffer);
}
// Extract the selected region
const region = extractRegion(buffer, selection.start, selection.end);
// Apply effect to the region
const processedRegion = effectFn(region);
// Replace the region in the original buffer
return replaceRegion(buffer, processedRegion, selection.start);
}
/**
* Apply an async effect function to a selection, or entire buffer if no selection
*/
export async function applyAsyncEffectToSelection(
buffer: AudioBuffer,
selection: Selection | null,
effectFn: (buffer: AudioBuffer) => Promise<AudioBuffer>
): Promise<AudioBuffer> {
if (!selection || selection.start === selection.end) {
// No selection, apply to entire buffer
return await effectFn(buffer);
}
// Extract the selected region
const region = extractRegion(buffer, selection.start, selection.end);
// Apply effect to the region
const processedRegion = await effectFn(region);
// Replace the region in the original buffer
return replaceRegion(buffer, processedRegion, selection.start);
}
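The time-to-sample conversion underlying these helpers can be sketched on plain mono arrays (`extract`/`replace` are hypothetical array analogues, without AudioBuffer):

```typescript
// Extract [startTime, endTime) from a mono sample array
function extract(
  data: Float32Array,
  sampleRate: number,
  startTime: number,
  endTime: number
): Float32Array {
  const start = Math.max(0, Math.floor(startTime * sampleRate));
  const end = Math.min(data.length, Math.floor(endTime * sampleRate));
  return data.slice(start, Math.max(start, end));
}

// Write a processed region back at startTime, clamped to the original length
function replace(
  data: Float32Array,
  region: Float32Array,
  sampleRate: number,
  startTime: number
): Float32Array {
  const out = data.slice(); // copy, so the original is not mutated
  const start = Math.floor(startTime * sampleRate);
  for (let i = 0; i < region.length && start + i < out.length; i++) {
    out[start + i] = region[i];
  }
  return out;
}

const sr = 4; // a 4 Hz sample rate keeps the indices tiny
const full = new Float32Array([0, 1, 2, 3, 4, 5, 6, 7]); // 2 seconds of "audio"
const middle = extract(full, sr, 0.5, 1.5); // samples 2..5 → [2, 3, 4, 5]
```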


@@ -0,0 +1,340 @@
/**
* Time-based effects (Delay, Reverb, Chorus, Flanger, Phaser)
*/
import { getAudioContext } from '../context';
export interface DelayParameters {
time: number; // ms - delay time
feedback: number; // 0-1 - amount of delayed signal fed back
mix: number; // 0-1 - dry/wet mix (0 = dry, 1 = wet)
}
export interface ReverbParameters {
roomSize: number; // 0-1 - size of the reverb room
damping: number; // 0-1 - high frequency damping
mix: number; // 0-1 - dry/wet mix
}
export interface ChorusParameters {
rate: number; // Hz - LFO rate
depth: number; // 0-1 - modulation depth
delay: number; // ms - base delay time
mix: number; // 0-1 - dry/wet mix
}
export interface FlangerParameters {
rate: number; // Hz - LFO rate
depth: number; // 0-1 - modulation depth
feedback: number; // 0-1 - feedback amount
delay: number; // ms - base delay time
mix: number; // 0-1 - dry/wet mix
}
export interface PhaserParameters {
rate: number; // Hz - LFO rate
depth: number; // 0-1 - modulation depth
feedback: number; // 0-1 - feedback amount
stages: number; // 2-12 - number of allpass filters
mix: number; // 0-1 - dry/wet mix
}
/**
* Apply delay/echo effect to audio buffer
*/
export async function applyDelay(
buffer: AudioBuffer,
params: DelayParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
// Calculate delay in samples
const delaySamples = Math.floor((params.time / 1000) * sampleRate);
// Create output buffer (needs extra length for delay tail)
const outputLength = length + delaySamples * 5; // Allow for multiple echoes
const outputBuffer = audioContext.createBuffer(channels, outputLength, sampleRate);
// Process each channel
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
// Copy input and add delayed copies with feedback
for (let i = 0; i < outputLength; i++) {
let sample = 0;
// Add original signal
if (i < length) {
sample += inputData[i] * (1 - params.mix);
}
// Add delayed signal with feedback
let delayIndex = i;
let feedbackGain = params.mix;
for (let echo = 0; echo < 10; echo++) {
delayIndex -= delaySamples;
if (delayIndex >= 0 && delayIndex < length) {
sample += inputData[delayIndex] * feedbackGain;
}
feedbackGain *= params.feedback;
if (feedbackGain < 0.001) break; // Stop when feedback is negligible
}
outputData[i] = sample;
}
}
return outputBuffer;
}
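Each echo in the loop above is scaled by `mix * feedback^n`, so the tail decays geometrically until it falls under the 0.001 cutoff. The gain series on its own (`echoGains` is a hypothetical helper):

```typescript
// Gains applied to successive echoes: mix, mix*fb, mix*fb², …
// stopping when the gain drops below the cutoff (or after 10 echoes)
function echoGains(mix: number, feedback: number, cutoff = 0.001): number[] {
  const gains: number[] = [];
  let g = mix;
  while (g >= cutoff && gains.length < 10) {
    gains.push(g);
    g *= feedback;
  }
  return gains;
}

const gains = echoGains(0.5, 0.3);
// ≈ [0.5, 0.15, 0.045, 0.0135, 0.00405, 0.001215] — six audible echoes
```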
/**
* Apply simple algorithmic reverb to audio buffer
*/
export async function applyReverb(
buffer: AudioBuffer,
params: ReverbParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
// Reverb uses multiple delay lines (Schroeder reverb algorithm)
const combDelays = [1557, 1617, 1491, 1422, 1277, 1356, 1188, 1116].map(
d => Math.floor(d * params.roomSize * (sampleRate / 44100))
);
const allpassDelays = [225, 556, 441, 341].map(
d => Math.floor(d * (sampleRate / 44100))
);
// Create output buffer with reverb tail
const outputLength = length + Math.floor(sampleRate * 3 * params.roomSize);
const outputBuffer = audioContext.createBuffer(channels, outputLength, sampleRate);
// Process each channel
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
// Comb filter buffers
const combBuffers = combDelays.map(delay => new Float32Array(delay));
const combIndices = combDelays.map(() => 0);
// Allpass filter buffers
const allpassBuffers = allpassDelays.map(delay => new Float32Array(delay));
const allpassIndices = allpassDelays.map(() => 0);
// Process samples
for (let i = 0; i < outputLength; i++) {
let input = i < length ? inputData[i] : 0;
let combSum = 0;
// Parallel comb filters
for (let c = 0; c < combDelays.length; c++) {
const delayedSample = combBuffers[c][combIndices[c]];
combSum += delayedSample;
// Feedback with damping
const feedback = delayedSample * (0.84 - params.damping * 0.2);
combBuffers[c][combIndices[c]] = input + feedback;
combIndices[c] = (combIndices[c] + 1) % combDelays[c];
}
// Average comb outputs
let sample = combSum / combDelays.length;
// Series allpass filters
for (let a = 0; a < allpassDelays.length; a++) {
const delayed = allpassBuffers[a][allpassIndices[a]];
const output = -sample + delayed;
allpassBuffers[a][allpassIndices[a]] = sample + delayed * 0.5;
sample = output;
allpassIndices[a] = (allpassIndices[a] + 1) % allpassDelays[a];
}
// Mix dry and wet
outputData[i] = input * (1 - params.mix) + sample * params.mix * 0.5;
}
}
return outputBuffer;
}
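The parallel comb stage above is the heart of the Schroeder design. A minimal standalone sketch of one feedback comb (hypothetical `combFilter` helper with illustrative delay and damping values, not part of the repo) shows how an impulse decays into evenly spaced echoes under the same feedback law:

```typescript
// One feedback comb filter, isolated from applyReverb for illustration.
function combFilter(input: Float32Array, delay: number, damping: number): Float32Array {
  const out = new Float32Array(input.length);
  const buf = new Float32Array(delay); // circular delay line
  let idx = 0;
  for (let i = 0; i < input.length; i++) {
    const delayed = buf[idx];
    out[i] = delayed;
    // Same feedback law as applyReverb: gain shrinks as damping grows
    buf[idx] = input[i] + delayed * (0.84 - damping * 0.2);
    idx = (idx + 1) % delay;
  }
  return out;
}

// An impulse produces echoes at multiples of the delay, each scaled by the
// feedback gain (0.84 - 0.2 * 0.2 = 0.8 here).
const impulse = Float32Array.from({ length: 10 }, (_, i) => (i === 0 ? 1 : 0));
const echoes = combFilter(impulse, 3, 0.2);
```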
/**
* Apply chorus effect to audio buffer
*/
export async function applyChorus(
buffer: AudioBuffer,
params: ChorusParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
// Create output buffer
const outputBuffer = audioContext.createBuffer(channels, length, sampleRate);
// Base delay in samples
const baseDelaySamples = (params.delay / 1000) * sampleRate;
const maxDelaySamples = baseDelaySamples + (params.depth * sampleRate * 0.005);
// Process each channel
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
// Create delay buffer
const delayBuffer = new Float32Array(Math.ceil(maxDelaySamples) + 1);
let delayIndex = 0;
for (let i = 0; i < length; i++) {
const input = inputData[i];
// Calculate LFO (Low Frequency Oscillator)
const lfoPhase = (i / sampleRate) * params.rate * 2 * Math.PI;
const lfo = Math.sin(lfoPhase);
// Modulated delay time
const modulatedDelay = baseDelaySamples + (lfo * params.depth * sampleRate * 0.005);
// Read from delay buffer with interpolation
const readIndex = (delayIndex - modulatedDelay + delayBuffer.length) % delayBuffer.length;
const readIndexInt = Math.floor(readIndex);
const readIndexFrac = readIndex - readIndexInt;
const sample1 = delayBuffer[readIndexInt];
const sample2 = delayBuffer[(readIndexInt + 1) % delayBuffer.length];
const delayedSample = sample1 + (sample2 - sample1) * readIndexFrac;
// Write to delay buffer
delayBuffer[delayIndex] = input;
delayIndex = (delayIndex + 1) % delayBuffer.length;
// Mix dry and wet
outputData[i] = input * (1 - params.mix) + delayedSample * params.mix;
}
}
return outputBuffer;
}
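The fractional delay-line read above is what keeps the modulated delay click-free. A minimal sketch (hypothetical `readInterpolated` helper, illustrative only) isolates the linear interpolation between neighbouring samples, including the wrap-around at the buffer edge:

```typescript
// Interpolated read from a circular delay line, as used by the chorus above.
function readInterpolated(delayBuffer: Float32Array, readIndex: number): number {
  const i0 = Math.floor(readIndex);
  const frac = readIndex - i0;
  const s1 = delayBuffer[i0];
  const s2 = delayBuffer[(i0 + 1) % delayBuffer.length]; // wraps at the end
  return s1 + (s2 - s1) * frac; // linear interpolation between neighbours
}

const line = Float32Array.from([0, 1, 2, 3]);
const mid = readInterpolated(line, 1.5);     // halfway between samples 1 and 2
const wrapped = readInterpolated(line, 3.5); // interpolates across the wrap point
```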
/**
* Apply flanger effect to audio buffer
*/
export async function applyFlanger(
buffer: AudioBuffer,
params: FlangerParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
// Create output buffer
const outputBuffer = audioContext.createBuffer(channels, length, sampleRate);
// Base delay in samples (shorter than chorus)
const baseDelaySamples = (params.delay / 1000) * sampleRate;
const maxDelaySamples = baseDelaySamples + (params.depth * sampleRate * 0.002);
// Process each channel
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
// Create delay buffer
const delayBuffer = new Float32Array(Math.ceil(maxDelaySamples) + 1);
let delayIndex = 0;
for (let i = 0; i < length; i++) {
const input = inputData[i];
// Calculate LFO
const lfoPhase = (i / sampleRate) * params.rate * 2 * Math.PI;
const lfo = Math.sin(lfoPhase);
// Modulated delay time
const modulatedDelay = baseDelaySamples + (lfo * params.depth * sampleRate * 0.002);
// Read from delay buffer with interpolation
const readIndex = (delayIndex - modulatedDelay + delayBuffer.length) % delayBuffer.length;
const readIndexInt = Math.floor(readIndex);
const readIndexFrac = readIndex - readIndexInt;
const sample1 = delayBuffer[readIndexInt];
const sample2 = delayBuffer[(readIndexInt + 1) % delayBuffer.length];
const delayedSample = sample1 + (sample2 - sample1) * readIndexFrac;
// Write to delay buffer with feedback
delayBuffer[delayIndex] = input + delayedSample * params.feedback;
delayIndex = (delayIndex + 1) % delayBuffer.length;
// Mix dry and wet
outputData[i] = input * (1 - params.mix) + delayedSample * params.mix;
}
}
return outputBuffer;
}
/**
* Apply phaser effect to audio buffer
*/
export async function applyPhaser(
buffer: AudioBuffer,
params: PhaserParameters
): Promise<AudioBuffer> {
const audioContext = getAudioContext();
const channels = buffer.numberOfChannels;
const length = buffer.length;
const sampleRate = buffer.sampleRate;
// Create output buffer
const outputBuffer = audioContext.createBuffer(channels, length, sampleRate);
// Process each channel
for (let channel = 0; channel < channels; channel++) {
const inputData = buffer.getChannelData(channel);
const outputData = outputBuffer.getChannelData(channel);
// Allpass filter state for each stage
const stages = Math.floor(params.stages);
const allpassStates = new Array(stages).fill(0);
for (let i = 0; i < length; i++) {
let input = inputData[i];
let output = input;
// Calculate LFO
const lfoPhase = (i / sampleRate) * params.rate * 2 * Math.PI;
const lfo = Math.sin(lfoPhase);
// Modulated allpass frequency (200Hz to 2000Hz)
const baseFreq = 200 + (lfo + 1) * 0.5 * 1800 * params.depth;
const omega = (2 * Math.PI * baseFreq) / sampleRate;
const alpha = (1 - Math.tan(omega / 2)) / (1 + Math.tan(omega / 2));
// Apply cascaded allpass filters
for (let stage = 0; stage < stages; stage++) {
const filtered = alpha * output + allpassStates[stage];
allpassStates[stage] = output - alpha * filtered;
output = filtered;
}
// Scale by feedback amount (note: a true phaser would route this back into the allpass chain input)
output = output * (1 + params.feedback);
// Mix dry and wet
outputData[i] = input * (1 - params.mix) + output * params.mix;
}
}
return outputBuffer;
}

lib/audio/export.ts Normal file

@@ -0,0 +1,257 @@
/**
* Audio export utilities
* Supports WAV, MP3, and a simplified FLAC-style container (see audioBufferToFlac)
*/
export interface ExportOptions {
format: 'wav' | 'mp3' | 'flac';
bitDepth?: 16 | 24 | 32; // For WAV and FLAC
sampleRate?: number; // If different from source, will resample
normalize?: boolean; // Normalize to prevent clipping
bitrate?: number; // For MP3 (kbps): 128, 192, 256, 320
quality?: number; // For FLAC compression: 0-9 (0=fast/large, 9=slow/small)
}
/**
* Convert an AudioBuffer to WAV file
*/
export function audioBufferToWav(
audioBuffer: AudioBuffer,
options: ExportOptions = { format: 'wav', bitDepth: 16 }
): ArrayBuffer {
const bitDepth = options.bitDepth ?? 16;
const { normalize } = options;
const numberOfChannels = audioBuffer.numberOfChannels;
const sampleRate = audioBuffer.sampleRate;
const length = audioBuffer.length;
// Get channel data
const channels: Float32Array[] = [];
for (let i = 0; i < numberOfChannels; i++) {
channels.push(audioBuffer.getChannelData(i));
}
// Find peak if normalizing
let peak = 1.0;
if (normalize) {
peak = 0;
for (const channel of channels) {
for (let i = 0; i < channel.length; i++) {
const abs = Math.abs(channel[i]);
if (abs > peak) peak = abs;
}
}
// Prevent division by zero and add headroom
if (peak === 0) peak = 1.0;
else peak = peak * 1.01; // 1% headroom
}
// Calculate sizes
const bytesPerSample = bitDepth / 8;
const blockAlign = numberOfChannels * bytesPerSample;
const dataSize = length * blockAlign;
const bufferSize = 44 + dataSize; // 44 bytes for WAV header
// Create buffer
const buffer = new ArrayBuffer(bufferSize);
const view = new DataView(buffer);
// Write WAV header
let offset = 0;
// RIFF chunk descriptor
writeString(view, offset, 'RIFF'); offset += 4;
view.setUint32(offset, bufferSize - 8, true); offset += 4; // File size - 8
writeString(view, offset, 'WAVE'); offset += 4;
// fmt sub-chunk
writeString(view, offset, 'fmt '); offset += 4;
view.setUint32(offset, 16, true); offset += 4; // Subchunk size (16 for PCM)
view.setUint16(offset, bitDepth === 32 ? 3 : 1, true); offset += 2; // Audio format (1 = PCM, 3 = IEEE float)
view.setUint16(offset, numberOfChannels, true); offset += 2;
view.setUint32(offset, sampleRate, true); offset += 4;
view.setUint32(offset, sampleRate * blockAlign, true); offset += 4; // Byte rate
view.setUint16(offset, blockAlign, true); offset += 2;
view.setUint16(offset, bitDepth, true); offset += 2;
// data sub-chunk
writeString(view, offset, 'data'); offset += 4;
view.setUint32(offset, dataSize, true); offset += 4;
// Write interleaved audio data
if (bitDepth === 16) {
for (let i = 0; i < length; i++) {
for (let channel = 0; channel < numberOfChannels; channel++) {
const sample = Math.max(-1, Math.min(1, channels[channel][i] / peak));
view.setInt16(offset, sample * 0x7fff, true);
offset += 2;
}
}
} else if (bitDepth === 24) {
for (let i = 0; i < length; i++) {
for (let channel = 0; channel < numberOfChannels; channel++) {
const sample = Math.max(-1, Math.min(1, channels[channel][i] / peak));
const int24 = Math.round(sample * 0x7fffff);
view.setUint8(offset, int24 & 0xff); offset++;
view.setUint8(offset, (int24 >> 8) & 0xff); offset++;
view.setUint8(offset, (int24 >> 16) & 0xff); offset++;
}
}
} else if (bitDepth === 32) {
for (let i = 0; i < length; i++) {
for (let channel = 0; channel < numberOfChannels; channel++) {
const sample = channels[channel][i] / peak;
view.setFloat32(offset, sample, true);
offset += 4;
}
}
}
return buffer;
}
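The 44-byte header layout written above can be checked independently of any AudioBuffer. This sketch writes the same fields for one second of 16-bit mono at 44.1 kHz and reads them back (the standalone `writeAscii` helper mirrors `writeString`; the values are illustrative):

```typescript
// Write a 44-byte PCM WAV header with the same layout as audioBufferToWav.
function writeAscii(view: DataView, offset: number, s: string): void {
  for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
}

const sampleRate = 44100;
const dataSize = sampleRate * 2; // 1 channel * 2 bytes/sample * 44100 frames
const header = new DataView(new ArrayBuffer(44));
writeAscii(header, 0, "RIFF");
header.setUint32(4, 44 + dataSize - 8, true); // file size minus 8-byte RIFF prefix
writeAscii(header, 8, "WAVE");
writeAscii(header, 12, "fmt ");
header.setUint32(16, 16, true);             // fmt chunk size (16 for PCM)
header.setUint16(20, 1, true);              // audio format 1 = integer PCM
header.setUint16(22, 1, true);              // channels
header.setUint32(24, sampleRate, true);
header.setUint32(28, sampleRate * 2, true); // byte rate = sampleRate * blockAlign
header.setUint16(32, 2, true);              // block align
header.setUint16(34, 16, true);             // bits per sample
writeAscii(header, 36, "data");
header.setUint32(40, dataSize, true);
```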
/**
* Download an ArrayBuffer as a file
*/
export function downloadArrayBuffer(
arrayBuffer: ArrayBuffer,
filename: string,
mimeType: string = 'audio/wav'
): void {
const blob = new Blob([arrayBuffer], { type: mimeType });
const url = URL.createObjectURL(blob);
const link = document.createElement('a');
link.href = url;
link.download = filename;
document.body.appendChild(link);
link.click();
document.body.removeChild(link);
URL.revokeObjectURL(url);
}
/**
* Convert an AudioBuffer to MP3
*/
export async function audioBufferToMp3(
audioBuffer: AudioBuffer,
options: ExportOptions = { format: 'mp3', bitrate: 192 }
): Promise<ArrayBuffer> {
// Import Mp3Encoder from lamejs
const { Mp3Encoder } = await import('lamejs/src/js/index.js');
const { bitrate = 192, normalize } = options;
const numberOfChannels = Math.min(audioBuffer.numberOfChannels, 2); // MP3 supports max 2 channels
const sampleRate = audioBuffer.sampleRate;
const samples = audioBuffer.length;
// Get channel data
const left = audioBuffer.getChannelData(0);
const right = numberOfChannels > 1 ? audioBuffer.getChannelData(1) : left;
// Find peak if normalizing
let peak = 1.0;
if (normalize) {
peak = 0;
for (let i = 0; i < samples; i++) {
peak = Math.max(peak, Math.abs(left[i]), Math.abs(right[i]));
}
if (peak === 0) peak = 1.0;
else peak = peak * 1.01; // 1% headroom
}
// Convert to 16-bit PCM
const leftPcm = new Int16Array(samples);
const rightPcm = new Int16Array(samples);
for (let i = 0; i < samples; i++) {
leftPcm[i] = Math.max(-32768, Math.min(32767, (left[i] / peak) * 32767));
rightPcm[i] = Math.max(-32768, Math.min(32767, (right[i] / peak) * 32767));
}
// Create MP3 encoder
const mp3encoder = new Mp3Encoder(numberOfChannels, sampleRate, bitrate);
const mp3Data: Int8Array[] = [];
const sampleBlockSize = 1152; // Standard MP3 frame size
// Encode in blocks
for (let i = 0; i < samples; i += sampleBlockSize) {
const leftChunk = leftPcm.subarray(i, Math.min(i + sampleBlockSize, samples));
const rightChunk = numberOfChannels > 1
? rightPcm.subarray(i, Math.min(i + sampleBlockSize, samples))
: leftChunk;
const mp3buf = mp3encoder.encodeBuffer(leftChunk, rightChunk);
if (mp3buf.length > 0) {
mp3Data.push(mp3buf);
}
}
// Flush remaining data
const mp3buf = mp3encoder.flush();
if (mp3buf.length > 0) {
mp3Data.push(mp3buf);
}
// Combine all chunks
const totalLength = mp3Data.reduce((acc, arr) => acc + arr.length, 0);
const result = new Uint8Array(totalLength);
let offset = 0;
for (const chunk of mp3Data) {
result.set(chunk, offset);
offset += chunk.length;
}
return result.buffer;
}
/**
* Convert an AudioBuffer to FLAC
* Note: This is a simplified FLAC encoder using WAV+DEFLATE compression
*/
export async function audioBufferToFlac(
audioBuffer: AudioBuffer,
options: ExportOptions = { format: 'flac', bitDepth: 16 }
): Promise<ArrayBuffer> {
// For true FLAC encoding, we'd need a proper FLAC encoder
// As a workaround, we'll create a compressed WAV using fflate
const fflate = await import('fflate');
const bitDepth = options.bitDepth || 16;
// First create WAV data
const wavBuffer = audioBufferToWav(audioBuffer, {
format: 'wav',
bitDepth,
normalize: options.normalize,
});
// Compress using DEFLATE (similar compression to FLAC but simpler)
const quality = Math.max(0, Math.min(9, options.quality || 6)) as 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9;
const compressed = fflate.zlibSync(new Uint8Array(wavBuffer), { level: quality });
// Create a simple container format
// Format: 'fLAC' magic (4 bytes) + original size (4 bytes, big-endian) + compressed data
const result = new Uint8Array(8 + compressed.length);
const view = new DataView(result.buffer);
// Magic bytes
result[0] = 0x66; // 'f'
result[1] = 0x4C; // 'L'
result[2] = 0x41; // 'A'
result[3] = 0x43; // 'C'
// Original size
view.setUint32(4, wavBuffer.byteLength, false);
// Compressed data
result.set(compressed, 8);
return result.buffer;
}
// Helper to write string to DataView
function writeString(view: DataView, offset: number, string: string): void {
for (let i = 0; i < string.length; i++) {
view.setUint8(offset + i, string.charCodeAt(i));
}
}


@@ -40,6 +40,9 @@ export class AudioPlayer {
   // Resume audio context if needed
   await resumeAudioContext();
+  // Calculate start offset BEFORE stopping (since stop() resets pauseTime)
+  const offset = this.isPaused ? this.pauseTime : startOffset;
   // Stop any existing playback
   this.stop();
@@ -48,8 +51,7 @@
   this.sourceNode.buffer = this.audioBuffer;
   this.sourceNode.connect(this.gainNode);
-  // Calculate start offset
-  const offset = this.isPaused ? this.pauseTime : startOffset;
+  // Set start time
   this.startTime = this.audioContext.currentTime - offset;
   // Start playback
@@ -73,8 +75,10 @@
 pause(): void {
   if (!this.isPlaying) return;
-  this.pauseTime = this.getCurrentTime();
+  // Save current time BEFORE calling stop (which resets it)
+  const savedTime = this.getCurrentTime();
   this.stop();
+  this.pauseTime = savedTime;
   this.isPaused = true;
 }
@@ -84,6 +88,8 @@
 stop(): void {
   if (this.sourceNode) {
     try {
+      // Clear onended callback first to prevent interference
+      this.sourceNode.onended = null;
       this.sourceNode.stop();
     } catch (error) {
       // Ignore errors if already stopped
@@ -113,8 +119,10 @@
 /**
  * Seek to a specific time
+ * @param time - Time in seconds to seek to
+ * @param autoPlay - Whether to automatically start playback after seeking (default: false)
  */
-async seek(time: number): Promise<void> {
+async seek(time: number, autoPlay: boolean = false): Promise<void> {
   if (!this.audioBuffer) return;
   const wasPlaying = this.isPlaying;
@@ -123,7 +131,8 @@
   this.stop();
   this.pauseTime = clampedTime;
-  if (wasPlaying) {
+  // Auto-play if requested, or continue playing if was already playing
+  if (autoPlay || wasPlaying) {
     await this.play(clampedTime);
   } else {
     this.isPaused = true;

lib/audio/track-utils.ts Normal file

@@ -0,0 +1,164 @@
/**
* Track utility functions
*/
import type { Track, TrackColor } from '@/types/track';
import { DEFAULT_TRACK_HEIGHT, TRACK_COLORS } from '@/types/track';
import { createEffectChain } from '@/lib/audio/effects/chain';
import { createAutomationLane } from '@/lib/audio/automation-utils';
/**
* Generate a unique track ID
*/
export function generateTrackId(): string {
return `track-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
/**
* Create a new empty track
*/
export function createTrack(name?: string, color?: TrackColor, height?: number): Track {
const colors: TrackColor[] = ['blue', 'green', 'purple', 'orange', 'pink', 'indigo', 'yellow', 'red'];
const randomColor = colors[Math.floor(Math.random() * colors.length)];
// Ensure name is always a string, handle cases where event objects might be passed
const trackName = typeof name === 'string' && name.trim() ? name.trim() : 'New Track';
const trackId = generateTrackId();
return {
id: trackId,
name: trackName,
color: TRACK_COLORS[color || randomColor],
height: height ?? DEFAULT_TRACK_HEIGHT,
audioBuffer: null,
volume: 0.8,
pan: 0,
mute: false,
solo: false,
recordEnabled: false,
effectChain: createEffectChain(`${trackName} Effects`),
automation: {
lanes: [
createAutomationLane(trackId, 'volume', 'Volume', {
min: 0,
max: 1,
unit: 'dB',
}),
createAutomationLane(trackId, 'pan', 'Pan', {
min: -1,
max: 1,
formatter: (value: number) => {
if (value === 0) return 'C';
if (value < 0) return `${Math.abs(value * 100).toFixed(0)}L`;
return `${(value * 100).toFixed(0)}R`;
},
}),
],
showAutomation: false,
},
collapsed: false,
selected: false,
showEffects: false,
selection: null,
};
}
/**
* Create a track from an audio buffer
*/
export function createTrackFromBuffer(
buffer: AudioBuffer,
name?: string,
color?: TrackColor,
height?: number
): Track {
// Ensure name is a string before passing to createTrack
const trackName = typeof name === 'string' && name.trim() ? name.trim() : undefined;
const track = createTrack(trackName, color, height);
track.audioBuffer = buffer;
return track;
}
/**
* Mix multiple tracks into a single stereo buffer
*/
export function mixTracks(
tracks: Track[],
sampleRate: number,
duration: number
): AudioBuffer {
// Determine the length needed
const length = Math.ceil(duration * sampleRate);
// Create output buffer (stereo)
const offlineContext = new OfflineAudioContext(2, length, sampleRate);
const outputBuffer = offlineContext.createBuffer(2, length, sampleRate);
const leftChannel = outputBuffer.getChannelData(0);
const rightChannel = outputBuffer.getChannelData(1);
// Check if any tracks are soloed
const soloedTracks = tracks.filter((t) => t.solo && !t.mute);
const audibleTracks = soloedTracks.length > 0 ? soloedTracks : tracks.filter((t) => !t.mute);
// Mix each audible track
for (const track of audibleTracks) {
if (!track.audioBuffer) continue;
const buffer = track.audioBuffer;
const trackLength = Math.min(buffer.length, length);
// Get source channels (handle mono/stereo)
const sourceLeft = buffer.getChannelData(0);
const sourceRight = buffer.numberOfChannels > 1 ? buffer.getChannelData(1) : sourceLeft;
// Calculate pan gains (constant power panning)
const panAngle = (track.pan * Math.PI) / 4; // -π/4 to π/4
const leftGain = Math.cos(panAngle + Math.PI / 4) * track.volume;
const rightGain = Math.sin(panAngle + Math.PI / 4) * track.volume;
// Mix into output buffer
for (let i = 0; i < trackLength; i++) {
leftChannel[i] += sourceLeft[i] * leftGain;
rightChannel[i] += sourceRight[i] * rightGain;
}
}
return outputBuffer;
}
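The pan law used in mixTracks is constant-power: for any pan position, the squared channel gains sum to the squared volume, so perceived loudness stays steady as a track sweeps left to right. A minimal sketch (hypothetical `panGains` helper extracting the same formula):

```typescript
// Constant-power pan gains, as computed inside mixTracks.
function panGains(pan: number, volume: number): [number, number] {
  const angle = (pan * Math.PI) / 4; // maps pan [-1, 1] to [-pi/4, pi/4]
  const left = Math.cos(angle + Math.PI / 4) * volume;
  const right = Math.sin(angle + Math.PI / 4) * volume;
  return [left, right];
}

const [lC, rC] = panGains(0, 1);  // centre: both channels near 0.707
const [lL, rL] = panGains(-1, 1); // hard left: right gain drops to 0
```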
/**
* Get the maximum duration across all tracks
*/
export function getMaxTrackDuration(tracks: Track[]): number {
let maxDuration = 0;
for (const track of tracks) {
if (track.audioBuffer) {
const duration = track.audioBuffer.duration;
if (duration > maxDuration) {
maxDuration = duration;
}
}
}
return maxDuration;
}
/**
* Calculate track mix gain (considering solo/mute)
*/
export function getTrackGain(track: Track, allTracks: Track[]): number {
// If track is muted, gain is 0
if (track.mute) return 0;
// Check if any tracks are soloed
const hasSoloedTracks = allTracks.some((t) => t.solo);
// If there are soloed tracks and this track is not soloed, gain is 0
if (hasSoloedTracks && !track.solo) return 0;
// Otherwise, return the track's volume
return track.volume;
}

lib/history/command.ts Normal file

@@ -0,0 +1,38 @@
/**
* Command Pattern for Undo/Redo System
*/
export interface Command {
/**
* Execute the command
*/
execute(): void;
/**
* Undo the command
*/
undo(): void;
/**
* Redo the command (default: call execute again)
*/
redo(): void;
/**
* Get a description of the command for UI display
*/
getDescription(): string;
}
/**
* Base command class with default redo implementation
*/
export abstract class BaseCommand implements Command {
abstract execute(): void;
abstract undo(): void;
abstract getDescription(): string;
redo(): void {
this.execute();
}
}


@@ -0,0 +1,155 @@
/**
* Edit commands for audio buffer operations
*/
import { BaseCommand } from '../command';
import type { Selection } from '@/types/selection';
import {
extractBufferSegment,
deleteBufferSegment,
insertBufferSegment,
trimBuffer,
} from '@/lib/audio/buffer-utils';
export type EditCommandType = 'cut' | 'delete' | 'paste' | 'trim';
export interface EditCommandParams {
type: EditCommandType;
beforeBuffer: AudioBuffer;
afterBuffer: AudioBuffer;
selection?: Selection;
clipboardData?: AudioBuffer;
pastePosition?: number;
onApply: (buffer: AudioBuffer) => void;
}
/**
* Command for edit operations (cut, delete, paste, trim)
*/
export class EditCommand extends BaseCommand {
private type: EditCommandType;
private beforeBuffer: AudioBuffer;
private afterBuffer: AudioBuffer;
private selection?: Selection;
private clipboardData?: AudioBuffer;
private pastePosition?: number;
private onApply: (buffer: AudioBuffer) => void;
constructor(params: EditCommandParams) {
super();
this.type = params.type;
this.beforeBuffer = params.beforeBuffer;
this.afterBuffer = params.afterBuffer;
this.selection = params.selection;
this.clipboardData = params.clipboardData;
this.pastePosition = params.pastePosition;
this.onApply = params.onApply;
}
execute(): void {
this.onApply(this.afterBuffer);
}
undo(): void {
this.onApply(this.beforeBuffer);
}
getDescription(): string {
switch (this.type) {
case 'cut':
return 'Cut';
case 'delete':
return 'Delete';
case 'paste':
return 'Paste';
case 'trim':
return 'Trim';
default:
return 'Edit';
}
}
/**
* Get the type of edit operation
*/
getType(): EditCommandType {
return this.type;
}
/**
* Get the selection that was affected
*/
getSelection(): Selection | undefined {
return this.selection;
}
}
/**
* Factory functions to create edit commands
*/
export function createCutCommand(
buffer: AudioBuffer,
selection: Selection,
onApply: (buffer: AudioBuffer) => void
): EditCommand {
const afterBuffer = deleteBufferSegment(buffer, selection.start, selection.end);
return new EditCommand({
type: 'cut',
beforeBuffer: buffer,
afterBuffer,
selection,
onApply,
});
}
export function createDeleteCommand(
buffer: AudioBuffer,
selection: Selection,
onApply: (buffer: AudioBuffer) => void
): EditCommand {
const afterBuffer = deleteBufferSegment(buffer, selection.start, selection.end);
return new EditCommand({
type: 'delete',
beforeBuffer: buffer,
afterBuffer,
selection,
onApply,
});
}
export function createPasteCommand(
buffer: AudioBuffer,
clipboardData: AudioBuffer,
pastePosition: number,
onApply: (buffer: AudioBuffer) => void
): EditCommand {
const afterBuffer = insertBufferSegment(buffer, clipboardData, pastePosition);
return new EditCommand({
type: 'paste',
beforeBuffer: buffer,
afterBuffer,
clipboardData,
pastePosition,
onApply,
});
}
export function createTrimCommand(
buffer: AudioBuffer,
selection: Selection,
onApply: (buffer: AudioBuffer) => void
): EditCommand {
const afterBuffer = trimBuffer(buffer, selection.start, selection.end);
return new EditCommand({
type: 'trim',
beforeBuffer: buffer,
afterBuffer,
selection,
onApply,
});
}
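EditCommand undoes by re-applying a stored snapshot rather than computing the inverse operation: each factory precomputes `afterBuffer` once, so execute/undo are just swaps. A minimal sketch with `number[]` standing in for AudioBuffer (hypothetical `SnapshotCommand`, illustrative only):

```typescript
// Snapshot-based undo, mirroring how EditCommand stores before/after buffers.
type Apply = (data: number[]) => void;

class SnapshotCommand {
  private before: number[];
  private after: number[];
  private onApply: Apply;
  constructor(before: number[], after: number[], onApply: Apply) {
    this.before = before;
    this.after = after;
    this.onApply = onApply;
  }
  execute(): void { this.onApply(this.after); }  // apply precomputed result
  undo(): void { this.onApply(this.before); }    // restore the snapshot
}

let current: number[] = [1, 2, 3, 4];
// "Delete" the middle samples, like createDeleteCommand precomputing afterBuffer
const del = new SnapshotCommand(current, [1, 4], (d) => { current = d; });
del.execute(); // current is [1, 4]
del.undo();    // current is back to [1, 2, 3, 4]
```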


@@ -0,0 +1,242 @@
/**
* Effect commands for undo/redo system
*/
import { BaseCommand } from '../command';
export class EffectCommand extends BaseCommand {
private originalBuffer: AudioBuffer;
private modifiedBuffer: AudioBuffer;
private applyCallback: (buffer: AudioBuffer) => void;
private description: string;
constructor(
originalBuffer: AudioBuffer,
modifiedBuffer: AudioBuffer,
applyCallback: (buffer: AudioBuffer) => void,
description: string
) {
super();
this.originalBuffer = originalBuffer;
this.modifiedBuffer = modifiedBuffer;
this.applyCallback = applyCallback;
this.description = description;
}
getDescription(): string {
return this.description;
}
execute(): void {
this.applyCallback(this.modifiedBuffer);
}
undo(): void {
this.applyCallback(this.originalBuffer);
}
redo(): void {
this.execute();
}
}
/**
* Factory function to create effect commands
*/
export function createEffectCommand(
originalBuffer: AudioBuffer,
effectFunction: (buffer: AudioBuffer) => AudioBuffer | Promise<AudioBuffer>,
applyCallback: (buffer: AudioBuffer) => void,
description: string
): EffectCommand {
const result = effectFunction(originalBuffer);
if (result instanceof Promise) {
// Silently falling back to the original buffer would make the command a no-op; fail loudly instead
throw new Error('createEffectCommand received an async effect; use createAsyncEffectCommand instead');
}
return new EffectCommand(originalBuffer, result, applyCallback, description);
}
/**
* Factory function to create async effect commands
*/
export async function createAsyncEffectCommand(
originalBuffer: AudioBuffer,
effectFunction: (buffer: AudioBuffer) => Promise<AudioBuffer>,
applyCallback: (buffer: AudioBuffer) => void,
description: string
): Promise<EffectCommand> {
const modifiedBuffer = await effectFunction(originalBuffer);
return new EffectCommand(originalBuffer, modifiedBuffer, applyCallback, description);
}
/**
* Factory for gain effect command
*/
export function createGainCommand(
buffer: AudioBuffer,
gainValue: number,
applyCallback: (buffer: AudioBuffer) => void
): EffectCommand {
return createEffectCommand(
buffer,
(buf) => {
// Import will happen at runtime
const { applyGain } = require('@/lib/audio/effects/gain');
return applyGain(buf, gainValue);
},
applyCallback,
`Apply Gain (${gainValue.toFixed(2)}x)`
);
}
/**
* Factory for normalize peak command
*/
export function createNormalizePeakCommand(
buffer: AudioBuffer,
targetPeak: number,
applyCallback: (buffer: AudioBuffer) => void
): EffectCommand {
return createEffectCommand(
buffer,
(buf) => {
const { normalizePeak } = require('@/lib/audio/effects/normalize');
return normalizePeak(buf, targetPeak);
},
applyCallback,
`Normalize to Peak (${targetPeak.toFixed(2)})`
);
}
/**
* Factory for normalize RMS command
*/
export function createNormalizeRMSCommand(
buffer: AudioBuffer,
targetRMS: number,
applyCallback: (buffer: AudioBuffer) => void
): EffectCommand {
return createEffectCommand(
buffer,
(buf) => {
const { normalizeRMS } = require('@/lib/audio/effects/normalize');
return normalizeRMS(buf, targetRMS);
},
applyCallback,
`Normalize to RMS (${targetRMS.toFixed(2)})`
);
}
/**
* Factory for fade in command
*/
export function createFadeInCommand(
buffer: AudioBuffer,
duration: number,
applyCallback: (buffer: AudioBuffer) => void
): EffectCommand {
return createEffectCommand(
buffer,
(buf) => {
const { applyFadeIn } = require('@/lib/audio/effects/fade');
return applyFadeIn(buf, duration);
},
applyCallback,
`Fade In (${duration.toFixed(2)}s)`
);
}
/**
* Factory for fade out command
*/
export function createFadeOutCommand(
buffer: AudioBuffer,
duration: number,
applyCallback: (buffer: AudioBuffer) => void
): EffectCommand {
return createEffectCommand(
buffer,
(buf) => {
const { applyFadeOut } = require('@/lib/audio/effects/fade');
return applyFadeOut(buf, duration);
},
applyCallback,
`Fade Out (${duration.toFixed(2)}s)`
);
}
/**
* Factory for reverse command
*/
export function createReverseCommand(
buffer: AudioBuffer,
applyCallback: (buffer: AudioBuffer) => void
): EffectCommand {
return createEffectCommand(
buffer,
(buf) => {
const { reverseAudio } = require('@/lib/audio/effects/reverse');
return reverseAudio(buf);
},
applyCallback,
'Reverse Audio'
);
}
/**
* Factory for low-pass filter command
*/
export async function createLowPassFilterCommand(
buffer: AudioBuffer,
frequency: number,
Q: number,
applyCallback: (buffer: AudioBuffer) => void
): Promise<EffectCommand> {
return createAsyncEffectCommand(
buffer,
async (buf) => {
const { applyLowPassFilter } = require('@/lib/audio/effects/filters');
return await applyLowPassFilter(buf, frequency, Q);
},
applyCallback,
`Low-Pass Filter (${frequency}Hz)`
);
}
/**
* Factory for high-pass filter command
*/
export async function createHighPassFilterCommand(
buffer: AudioBuffer,
frequency: number,
Q: number,
applyCallback: (buffer: AudioBuffer) => void
): Promise<EffectCommand> {
return createAsyncEffectCommand(
buffer,
async (buf) => {
const { applyHighPassFilter } = require('@/lib/audio/effects/filters');
return await applyHighPassFilter(buf, frequency, Q);
},
applyCallback,
`High-Pass Filter (${frequency}Hz)`
);
}
/**
* Factory for band-pass filter command
*/
export async function createBandPassFilterCommand(
buffer: AudioBuffer,
frequency: number,
Q: number,
applyCallback: (buffer: AudioBuffer) => void
): Promise<EffectCommand> {
return createAsyncEffectCommand(
buffer,
async (buf) => {
const { applyBandPassFilter } = require('@/lib/audio/effects/filters');
return await applyBandPassFilter(buf, frequency, Q);
},
applyCallback,
`Band-Pass Filter (${frequency}Hz)`
);
}


@@ -0,0 +1,190 @@
/**
* Multi-track edit commands for audio operations across tracks
*/
import { BaseCommand } from '../command';
import type { Track } from '@/types/track';
import type { Selection } from '@/types/selection';
import {
extractBufferSegment,
deleteBufferSegment,
insertBufferSegment,
duplicateBufferSegment,
} from '@/lib/audio/buffer-utils';
export type MultiTrackEditType = 'cut' | 'copy' | 'delete' | 'paste' | 'duplicate';
export interface MultiTrackEditParams {
type: MultiTrackEditType;
trackId: string;
beforeBuffer: AudioBuffer | null;
afterBuffer: AudioBuffer | null;
selection?: Selection;
clipboardData?: AudioBuffer;
pastePosition?: number;
onApply: (trackId: string, buffer: AudioBuffer | null, selection: Selection | null) => void;
}
/**
* Command for multi-track edit operations
*/
export class MultiTrackEditCommand extends BaseCommand {
private type: MultiTrackEditType;
private trackId: string;
private beforeBuffer: AudioBuffer | null;
private afterBuffer: AudioBuffer | null;
private selection?: Selection;
private clipboardData?: AudioBuffer;
private pastePosition?: number;
private onApply: (trackId: string, buffer: AudioBuffer | null, selection: Selection | null) => void;
constructor(params: MultiTrackEditParams) {
super();
this.type = params.type;
this.trackId = params.trackId;
this.beforeBuffer = params.beforeBuffer;
this.afterBuffer = params.afterBuffer;
this.selection = params.selection;
this.clipboardData = params.clipboardData;
this.pastePosition = params.pastePosition;
this.onApply = params.onApply;
}
execute(): void {
// For copy, don't modify the buffer, just update selection
if (this.type === 'copy') {
this.onApply(this.trackId, this.beforeBuffer, this.selection || null);
} else {
this.onApply(this.trackId, this.afterBuffer, null);
}
}
undo(): void {
this.onApply(this.trackId, this.beforeBuffer, null);
}
getDescription(): string {
switch (this.type) {
case 'cut':
return 'Cut';
case 'copy':
return 'Copy';
case 'delete':
return 'Delete';
case 'paste':
return 'Paste';
case 'duplicate':
return 'Duplicate';
default:
return 'Edit';
}
}
}
/**
* Factory functions to create multi-track edit commands
*/
export function createMultiTrackCutCommand(
trackId: string,
buffer: AudioBuffer,
selection: Selection,
onApply: (trackId: string, buffer: AudioBuffer | null, selection: Selection | null) => void
): MultiTrackEditCommand {
const afterBuffer = deleteBufferSegment(buffer, selection.start, selection.end);
return new MultiTrackEditCommand({
type: 'cut',
trackId,
beforeBuffer: buffer,
afterBuffer,
selection,
onApply,
});
}
export function createMultiTrackCopyCommand(
trackId: string,
buffer: AudioBuffer,
selection: Selection,
onApply: (trackId: string, buffer: AudioBuffer | null, selection: Selection | null) => void
): MultiTrackEditCommand {
// Copy doesn't modify the buffer
return new MultiTrackEditCommand({
type: 'copy',
trackId,
beforeBuffer: buffer,
afterBuffer: buffer,
selection,
onApply,
});
}
export function createMultiTrackDeleteCommand(
trackId: string,
buffer: AudioBuffer,
selection: Selection,
onApply: (trackId: string, buffer: AudioBuffer | null, selection: Selection | null) => void
): MultiTrackEditCommand {
const afterBuffer = deleteBufferSegment(buffer, selection.start, selection.end);
return new MultiTrackEditCommand({
type: 'delete',
trackId,
beforeBuffer: buffer,
afterBuffer,
selection,
onApply,
});
}
export function createMultiTrackPasteCommand(
trackId: string,
buffer: AudioBuffer | null,
clipboardData: AudioBuffer,
pastePosition: number,
onApply: (trackId: string, buffer: AudioBuffer | null, selection: Selection | null) => void
): MultiTrackEditCommand {
const targetBuffer = buffer || createSilentBuffer(clipboardData.sampleRate, clipboardData.numberOfChannels, pastePosition);
const afterBuffer = insertBufferSegment(targetBuffer, clipboardData, pastePosition);
return new MultiTrackEditCommand({
type: 'paste',
trackId,
beforeBuffer: buffer,
afterBuffer,
clipboardData,
pastePosition,
onApply,
});
}
export function createMultiTrackDuplicateCommand(
trackId: string,
buffer: AudioBuffer,
selection: Selection,
onApply: (trackId: string, buffer: AudioBuffer | null, selection: Selection | null) => void
): MultiTrackEditCommand {
const afterBuffer = duplicateBufferSegment(buffer, selection.start, selection.end);
return new MultiTrackEditCommand({
type: 'duplicate',
trackId,
beforeBuffer: buffer,
afterBuffer,
selection,
onApply,
});
}
/**
* Helper function to create a silent buffer
*/
function createSilentBuffer(sampleRate: number, numberOfChannels: number, duration: number): AudioBuffer {
// An AudioBuffer must be at least 1 sample long; a zero-length buffer throws.
const length = Math.max(1, Math.ceil(duration * sampleRate));
const audioContext = new OfflineAudioContext(numberOfChannels, length, sampleRate);
return audioContext.createBuffer(numberOfChannels, length, sampleRate);
}
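The buffer-splicing helpers used above (`insertBufferSegment`, `deleteBufferSegment`, `duplicateBufferSegment`) are imported elsewhere and not part of this diff. At the sample level, the paste operation reduces to splicing one `Float32Array` into another, once per channel. A minimal sketch with plain arrays so it runs outside the browser; `insertSamples` is a hypothetical stand-in, not the actual helper:

```typescript
// Sketch: splice `clip` samples into `target` at `offsetSamples`.
// Hypothetical stand-in for the per-channel core of insertBufferSegment;
// the real code would repeat this for every AudioBuffer channel.
function insertSamples(
  target: Float32Array,
  clip: Float32Array,
  offsetSamples: number
): Float32Array {
  const out = new Float32Array(target.length + clip.length);
  out.set(target.subarray(0, offsetSamples), 0); // samples before the paste point
  out.set(clip, offsetSamples); // the pasted clip
  out.set(target.subarray(offsetSamples), offsetSamples + clip.length); // the remainder
  return out;
}

const result = insertSamples(
  new Float32Array([1, 2, 3, 4]),
  new Float32Array([9, 9]),
  2
);
// result is [1, 2, 9, 9, 3, 4]
```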


@@ -0,0 +1,156 @@
/**
* History Manager for Undo/Redo functionality
*/
import type { Command } from './command';
export interface HistoryState {
canUndo: boolean;
canRedo: boolean;
undoDescription: string | null;
redoDescription: string | null;
historySize: number;
}
export class HistoryManager {
private undoStack: Command[] = [];
private redoStack: Command[] = [];
private maxHistorySize: number;
private listeners: Set<() => void> = new Set();
constructor(maxHistorySize: number = 50) {
this.maxHistorySize = maxHistorySize;
}
/**
* Execute a command and add it to history
*/
execute(command: Command): void {
command.execute();
this.undoStack.push(command);
// Limit history size
if (this.undoStack.length > this.maxHistorySize) {
this.undoStack.shift();
}
// Clear redo stack when new command is executed
this.redoStack = [];
this.notifyListeners();
}
/**
* Undo the last command
*/
undo(): boolean {
if (!this.canUndo()) return false;
const command = this.undoStack.pop()!;
command.undo();
this.redoStack.push(command);
this.notifyListeners();
return true;
}
/**
* Redo the last undone command
*/
redo(): boolean {
if (!this.canRedo()) return false;
const command = this.redoStack.pop()!;
command.redo();
this.undoStack.push(command);
this.notifyListeners();
return true;
}
/**
* Check if undo is available
*/
canUndo(): boolean {
return this.undoStack.length > 0;
}
/**
* Check if redo is available
*/
canRedo(): boolean {
return this.redoStack.length > 0;
}
/**
* Get current history state
*/
getState(): HistoryState {
return {
canUndo: this.canUndo(),
canRedo: this.canRedo(),
undoDescription: this.getUndoDescription(),
redoDescription: this.getRedoDescription(),
historySize: this.undoStack.length,
};
}
/**
* Get description of next undo action
*/
getUndoDescription(): string | null {
if (!this.canUndo()) return null;
return this.undoStack[this.undoStack.length - 1].getDescription();
}
/**
* Get description of next redo action
*/
getRedoDescription(): string | null {
if (!this.canRedo()) return null;
return this.redoStack[this.redoStack.length - 1].getDescription();
}
/**
* Clear all history
*/
clear(): void {
this.undoStack = [];
this.redoStack = [];
this.notifyListeners();
}
/**
* Subscribe to history changes
*/
subscribe(listener: () => void): () => void {
this.listeners.add(listener);
return () => this.listeners.delete(listener);
}
/**
* Notify all listeners of history changes
*/
private notifyListeners(): void {
this.listeners.forEach((listener) => listener());
}
/**
* Get current history size
*/
getHistorySize(): number {
return this.undoStack.length;
}
/**
* Set maximum history size
*/
setMaxHistorySize(size: number): void {
this.maxHistorySize = size;
// Trim undo stack if needed
while (this.undoStack.length > this.maxHistorySize) {
this.undoStack.shift();
}
this.notifyListeners();
}
}
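The undo/redo semantics above are easy to see on a toy command: `execute` pushes onto the undo stack and clears the redo stack, `undo` moves the command onto the redo stack, and `redo` moves it back. An illustrative standalone sketch mirroring those semantics (the `Cmd` shape and `run` helper are for demonstration only, not the real `HistoryManager` API):

```typescript
// Illustrative only: a counter command driven through two stacks,
// mirroring the HistoryManager semantics above.
interface Cmd { execute(): void; undo(): void; redo(): void; getDescription(): string }

let value = 0;
const increment: Cmd = {
  execute: () => { value += 1; },
  undo: () => { value -= 1; },
  redo: () => { value += 1; },
  getDescription: () => 'Increment',
};

const undoStack: Cmd[] = [];
const redoStack: Cmd[] = [];

// execute pushes to undo and invalidates redo, like HistoryManager.execute
function run(cmd: Cmd) { cmd.execute(); undoStack.push(cmd); redoStack.length = 0; }
function undo() { const c = undoStack.pop(); if (c) { c.undo(); redoStack.push(c); } }
function redo() { const c = redoStack.pop(); if (c) { c.redo(); undoStack.push(c); } }

run(increment); // value === 1
run(increment); // value === 2
undo();         // value === 1
redo();         // value === 2
```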


@@ -7,13 +7,14 @@ import { decodeAudioFile, formatDuration } from '@/lib/audio/decoder';
 export interface UseAudioPlayerReturn {
   // File management
   loadFile: (file: File) => Promise<void>;
+  loadBuffer: (buffer: AudioBuffer, name?: string) => void;
   clearFile: () => void;
   // Playback controls
   play: () => Promise<void>;
   pause: () => void;
   stop: () => void;
-  seek: (time: number) => Promise<void>;
+  seek: (time: number, autoPlay?: boolean) => Promise<void>;
   // Volume control
   setVolume: (volume: number) => void;
@@ -99,6 +100,21 @@ export function useAudioPlayer(): UseAudioPlayerReturn {
     [player]
   );
+  const loadBuffer = React.useCallback(
+    (buffer: AudioBuffer, name?: string) => {
+      if (!player) return;
+      player.loadBuffer(buffer);
+      setAudioBuffer(buffer);
+      if (name) setFileName(name);
+      setDuration(buffer.duration);
+      setCurrentTime(0);
+      setIsPlaying(false);
+      setIsPaused(false);
+    },
+    [player]
+  );
   const clearFile = React.useCallback(() => {
     if (!player) return;
@@ -143,11 +159,16 @@ export function useAudioPlayer(): UseAudioPlayerReturn {
   }, [player]);
   const seek = React.useCallback(
-    async (time: number) => {
+    async (time: number, autoPlay: boolean = false) => {
       if (!player) return;
-      await player.seek(time);
+      await player.seek(time, autoPlay);
       setCurrentTime(time);
+      // Update state based on what actually happened
+      const state = player.getState();
+      setIsPlaying(state.isPlaying);
+      setIsPaused(state.isPaused);
     },
     [player]
   );
@@ -173,6 +194,7 @@ export function useAudioPlayer(): UseAudioPlayerReturn {
   return {
     loadFile,
+    loadBuffer,
     clearFile,
     play,
     pause,

lib/hooks/useAudioWorker.ts (new file, 138 lines)

@@ -0,0 +1,138 @@
'use client';
import { useRef, useEffect, useCallback } from 'react';
import type { WorkerMessage, WorkerResponse } from '@/lib/workers/audio.worker';
/**
* Hook to use the audio Web Worker for heavy computations
* Automatically manages worker lifecycle and message passing
*/
export function useAudioWorker() {
const workerRef = useRef<Worker | null>(null);
const callbacksRef = useRef<Map<string, (result: any, error?: string) => void>>(new Map());
const messageIdRef = useRef(0);
// Initialize worker
useEffect(() => {
// Create worker from the audio worker file
workerRef.current = new Worker(
new URL('../workers/audio.worker.ts', import.meta.url),
{ type: 'module' }
);
// Handle messages from worker
workerRef.current.onmessage = (event: MessageEvent<WorkerResponse>) => {
const { id, result, error } = event.data;
const callback = callbacksRef.current.get(id);
if (callback) {
callback(result, error);
callbacksRef.current.delete(id);
}
};
// Cleanup on unmount
return () => {
if (workerRef.current) {
workerRef.current.terminate();
workerRef.current = null;
}
callbacksRef.current.clear();
};
}, []);
// Send message to worker
const sendMessage = useCallback(
<T = any>(type: WorkerMessage['type'], payload: any): Promise<T> => {
return new Promise((resolve, reject) => {
if (!workerRef.current) {
reject(new Error('Worker not initialized'));
return;
}
const id = `msg-${++messageIdRef.current}`;
const message: WorkerMessage = { id, type, payload };
callbacksRef.current.set(id, (result, error) => {
if (error) {
reject(new Error(error));
} else {
resolve(result);
}
});
workerRef.current.postMessage(message);
});
},
[]
);
// API methods
const generatePeaks = useCallback(
async (channelData: Float32Array, width: number): Promise<Float32Array> => {
const result = await sendMessage<Float32Array>('generatePeaks', {
channelData,
width,
});
return new Float32Array(result);
},
[sendMessage]
);
const generateMinMaxPeaks = useCallback(
async (
channelData: Float32Array,
width: number
): Promise<{ min: Float32Array; max: Float32Array }> => {
const result = await sendMessage<{ min: Float32Array; max: Float32Array }>(
'generateMinMaxPeaks',
{ channelData, width }
);
return {
min: new Float32Array(result.min),
max: new Float32Array(result.max),
};
},
[sendMessage]
);
const normalizePeaks = useCallback(
async (peaks: Float32Array, targetMax: number = 1): Promise<Float32Array> => {
const result = await sendMessage<Float32Array>('normalizePeaks', {
peaks,
targetMax,
});
return new Float32Array(result);
},
[sendMessage]
);
const analyzeAudio = useCallback(
async (
channelData: Float32Array
): Promise<{
peak: number;
rms: number;
crestFactor: number;
dynamicRange: number;
}> => {
return sendMessage('analyzeAudio', { channelData });
},
[sendMessage]
);
const findPeak = useCallback(
async (channelData: Float32Array): Promise<number> => {
return sendMessage<number>('findPeak', { channelData });
},
[sendMessage]
);
return {
generatePeaks,
generateMinMaxPeaks,
normalizePeaks,
analyzeAudio,
findPeak,
};
}
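The worker-side implementation of `generatePeaks` is not included in this diff. A common approach is to bucket the channel into `width` windows and keep the maximum absolute sample per bucket. A synchronous sketch under that assumption (the name `generatePeaksSync` is illustrative):

```typescript
// Sketch of what a generatePeaks implementation typically does:
// downsample channel data into `width` buckets, max |sample| per bucket.
function generatePeaksSync(channelData: Float32Array, width: number): Float32Array {
  const peaks = new Float32Array(width);
  const samplesPerBucket = Math.max(1, Math.floor(channelData.length / width));
  for (let bucket = 0; bucket < width; bucket++) {
    const start = bucket * samplesPerBucket;
    const end = Math.min(start + samplesPerBucket, channelData.length);
    let peak = 0;
    for (let i = start; i < end; i++) {
      const abs = Math.abs(channelData[i]);
      if (abs > peak) peak = abs;
    }
    peaks[bucket] = peak;
  }
  return peaks;
}

// Dyadic values chosen so float32 round-trips exactly.
const peaks = generatePeaksSync(new Float32Array([0.25, -0.5, 0.25, 0.75]), 2);
// peaks is [0.5, 0.75]
```

Moving this loop off the main thread matters because it is O(samples) per redraw; for a few minutes of 44.1 kHz audio that is tens of millions of iterations.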


@@ -0,0 +1,173 @@
/**
* Hook for recording automation data during playback
* Supports write, touch, and latch modes
*/
import { useCallback, useRef } from 'react';
import type { Track } from '@/types/track';
import type { AutomationPoint, AutomationMode } from '@/types/automation';
export interface AutomationRecordingState {
isRecording: boolean;
recordingLaneId: string | null;
touchActive: boolean; // For touch mode - tracks if control is being touched
latchTriggered: boolean; // For latch mode - tracks if recording has started
}
export function useAutomationRecording(
track: Track,
onUpdateTrack: (trackId: string, updates: Partial<Track>) => void
) {
const recordingStateRef = useRef<Map<string, AutomationRecordingState>>(new Map());
const recordingIntervalRef = useRef<Map<string, number>>(new Map());
const lastRecordedValueRef = useRef<Map<string, number>>(new Map());
/**
* Start recording automation for a specific lane
*/
const startRecording = useCallback((laneId: string, mode: AutomationMode) => {
const state: AutomationRecordingState = {
isRecording: mode === 'write',
recordingLaneId: laneId,
touchActive: false,
latchTriggered: false,
};
recordingStateRef.current.set(laneId, state);
}, []);
/**
* Stop recording automation for a specific lane
*/
const stopRecording = useCallback((laneId: string) => {
recordingStateRef.current.delete(laneId);
const intervalId = recordingIntervalRef.current.get(laneId);
if (intervalId) {
clearInterval(intervalId);
recordingIntervalRef.current.delete(laneId);
}
lastRecordedValueRef.current.delete(laneId);
}, []);
/**
* Record a single automation point
*/
const recordPoint = useCallback((
laneId: string,
currentTime: number,
value: number,
mode: AutomationMode
) => {
const lane = track.automation.lanes.find(l => l.id === laneId);
if (!lane) return;
const state = recordingStateRef.current.get(laneId);
if (!state) return;
// Check if we should record based on mode
let shouldRecord = false;
switch (mode) {
case 'write':
// Always record in write mode
shouldRecord = true;
break;
case 'touch':
// Only record when control is being touched
shouldRecord = state.touchActive;
break;
case 'latch':
// Record from first touch until stop
if (state.touchActive && !state.latchTriggered) {
state.latchTriggered = true;
}
shouldRecord = state.latchTriggered;
break;
default:
shouldRecord = false;
}
if (!shouldRecord) return;
// Check if value has changed significantly (avoid redundant points)
const lastValue = lastRecordedValueRef.current.get(laneId);
if (lastValue !== undefined && Math.abs(lastValue - value) < 0.001) {
return; // Skip if value hasn't changed
}
lastRecordedValueRef.current.set(laneId, value);
// In write mode, clear existing points in the time range
let updatedPoints = [...lane.points];
if (mode === 'write') {
// Remove points that are within a small time window of current time
updatedPoints = updatedPoints.filter(p =>
Math.abs(p.time - currentTime) > 0.05 // 50ms threshold
);
}
// Add new point
const newPoint: AutomationPoint = {
id: `point-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`,
time: currentTime,
value,
curve: 'linear',
};
updatedPoints.push(newPoint);
// Sort points by time
updatedPoints.sort((a, b) => a.time - b.time);
// Update track with new automation points
const updatedLanes = track.automation.lanes.map(l =>
l.id === laneId ? { ...l, points: updatedPoints } : l
);
onUpdateTrack(track.id, {
automation: {
...track.automation,
lanes: updatedLanes,
},
});
}, [track, onUpdateTrack]);
/**
* Set touch state for touch mode
*/
const setTouchActive = useCallback((laneId: string, active: boolean) => {
const state = recordingStateRef.current.get(laneId);
if (state) {
state.touchActive = active;
}
}, []);
/**
* Check if a lane is currently recording
*/
const isRecordingLane = useCallback((laneId: string): boolean => {
const state = recordingStateRef.current.get(laneId);
return state?.isRecording ?? false;
}, []);
/**
* Cleanup - stop all recording
*/
const cleanup = useCallback(() => {
recordingStateRef.current.forEach((_, laneId) => {
stopRecording(laneId);
});
recordingStateRef.current.clear();
}, [stopRecording]);
return {
startRecording,
stopRecording,
recordPoint,
setTouchActive,
isRecordingLane,
cleanup,
};
}

lib/hooks/useEffectChain.ts (new file, 122 lines)

@@ -0,0 +1,122 @@
import { useState, useCallback, useEffect } from 'react';
import type {
EffectChain,
EffectPreset,
ChainEffect,
EffectParameters,
} from '@/lib/audio/effects/chain';
import {
createEffectChain,
addEffectToChain,
removeEffectFromChain,
toggleEffect,
updateEffectParameters,
reorderEffects,
loadPreset,
} from '@/lib/audio/effects/chain';
const STORAGE_KEY_CHAIN = 'audio-ui-effect-chain';
const STORAGE_KEY_PRESETS = 'audio-ui-effect-presets';
export function useEffectChain() {
const [chain, setChain] = useState<EffectChain>(() => {
if (typeof window === 'undefined') return createEffectChain('Main Chain');
try {
const saved = localStorage.getItem(STORAGE_KEY_CHAIN);
return saved ? JSON.parse(saved) : createEffectChain('Main Chain');
} catch {
return createEffectChain('Main Chain');
}
});
const [presets, setPresets] = useState<EffectPreset[]>(() => {
if (typeof window === 'undefined') return [];
try {
const saved = localStorage.getItem(STORAGE_KEY_PRESETS);
return saved ? JSON.parse(saved) : [];
} catch {
return [];
}
});
// Save chain to localStorage whenever it changes
useEffect(() => {
if (typeof window === 'undefined') return;
try {
localStorage.setItem(STORAGE_KEY_CHAIN, JSON.stringify(chain));
} catch (error) {
console.error('Failed to save effect chain:', error);
}
}, [chain]);
// Save presets to localStorage whenever they change
useEffect(() => {
if (typeof window === 'undefined') return;
try {
localStorage.setItem(STORAGE_KEY_PRESETS, JSON.stringify(presets));
} catch (error) {
console.error('Failed to save presets:', error);
}
}, [presets]);
const addEffect = useCallback((effect: ChainEffect) => {
setChain((prev) => addEffectToChain(prev, effect));
}, []);
const removeEffect = useCallback((effectId: string) => {
setChain((prev) => removeEffectFromChain(prev, effectId));
}, []);
const toggleEffectEnabled = useCallback((effectId: string) => {
setChain((prev) => toggleEffect(prev, effectId));
}, []);
const updateEffect = useCallback(
(effectId: string, parameters: EffectParameters) => {
setChain((prev) => updateEffectParameters(prev, effectId, parameters));
},
[]
);
const reorder = useCallback((fromIndex: number, toIndex: number) => {
setChain((prev) => reorderEffects(prev, fromIndex, toIndex));
}, []);
const clearChain = useCallback(() => {
setChain((prev) => ({ ...prev, effects: [] }));
}, []);
const savePreset = useCallback((preset: EffectPreset) => {
setPresets((prev) => [...prev, preset]);
}, []);
const loadPresetToChain = useCallback((preset: EffectPreset) => {
const loadedChain = loadPreset(preset);
setChain(loadedChain);
}, []);
const deletePreset = useCallback((presetId: string) => {
setPresets((prev) => prev.filter((p) => p.id !== presetId));
}, []);
const importPreset = useCallback((preset: EffectPreset) => {
setPresets((prev) => [...prev, preset]);
}, []);
return {
chain,
presets,
addEffect,
removeEffect,
toggleEffectEnabled,
updateEffect,
reorder,
clearChain,
savePreset,
loadPresetToChain,
deletePreset,
importPreset,
};
}

lib/hooks/useHistory.ts (new file, 54 lines)

@@ -0,0 +1,54 @@
'use client';
import * as React from 'react';
import { HistoryManager } from '@/lib/history/history-manager';
import type { HistoryState } from '@/lib/history/history-manager';
import type { Command } from '@/lib/history/command';
export interface UseHistoryReturn {
execute: (command: Command) => void;
undo: () => boolean;
redo: () => boolean;
clear: () => void;
state: HistoryState;
}
export function useHistory(maxHistorySize: number = 50): UseHistoryReturn {
const [manager] = React.useState(() => new HistoryManager(maxHistorySize));
const [state, setState] = React.useState<HistoryState>(manager.getState());
React.useEffect(() => {
const unsubscribe = manager.subscribe(() => {
setState(manager.getState());
});
return unsubscribe;
}, [manager]);
const execute = React.useCallback(
(command: Command) => {
manager.execute(command);
},
[manager]
);
const undo = React.useCallback(() => {
return manager.undo();
}, [manager]);
const redo = React.useCallback(() => {
return manager.redo();
}, [manager]);
const clear = React.useCallback(() => {
manager.clear();
}, [manager]);
return {
execute,
undo,
redo,
clear,
state,
};
}

lib/hooks/useMarkers.ts (new file, 70 lines)

@@ -0,0 +1,70 @@
'use client';
import { useState, useCallback } from 'react';
import type { Marker, CreateMarkerInput } from '@/types/marker';
export function useMarkers() {
const [markers, setMarkers] = useState<Marker[]>([]);
const addMarker = useCallback((input: CreateMarkerInput): Marker => {
const marker: Marker = {
...input,
id: `marker-${Date.now()}-${Math.random()}`,
};
setMarkers((prev) => [...prev, marker].sort((a, b) => a.time - b.time));
return marker;
}, []);
const updateMarker = useCallback((id: string, updates: Partial<Marker>) => {
setMarkers((prev) => {
const updated = prev.map((m) =>
m.id === id ? { ...m, ...updates } : m
);
// Re-sort if time changed
if ('time' in updates) {
return updated.sort((a, b) => a.time - b.time);
}
return updated;
});
}, []);
const removeMarker = useCallback((id: string) => {
setMarkers((prev) => prev.filter((m) => m.id !== id));
}, []);
const clearMarkers = useCallback(() => {
setMarkers([]);
}, []);
const getMarkerAt = useCallback((time: number, tolerance: number = 0.1): Marker | undefined => {
return markers.find((m) => {
if (m.type === 'point') {
return Math.abs(m.time - time) <= tolerance;
} else {
// For regions, check if time is within the region
return m.endTime !== undefined && time >= m.time && time <= m.endTime;
}
});
}, [markers]);
const getNextMarker = useCallback((time: number): Marker | undefined => {
return markers.find((m) => m.time > time);
}, [markers]);
const getPreviousMarker = useCallback((time: number): Marker | undefined => {
const previous = markers.filter((m) => m.time < time);
return previous[previous.length - 1];
}, [markers]);
return {
markers,
addMarker,
updateMarker,
removeMarker,
clearMarkers,
getMarkerAt,
getNextMarker,
getPreviousMarker,
setMarkers,
};
}
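The linear `find`/`filter` lookups in `getNextMarker` and `getPreviousMarker` are only correct because `addMarker` and `updateMarker` keep the array time-sorted. A small worked example of that invariant (illustrative data):

```typescript
// Next/previous lookups over a time-sorted marker list,
// using the same find/filter pattern as the hook above.
interface M { id: string; time: number }

const sorted: M[] = [
  { id: 'a', time: 1 },
  { id: 'b', time: 4 },
  { id: 'c', time: 9 },
];

// First marker strictly after t=4:
const next = sorted.find((m) => m.time > 4);       // { id: 'c', time: 9 }

// Last marker strictly before t=4:
const prevList = sorted.filter((m) => m.time < 4);
const prev = prevList[prevList.length - 1];        // { id: 'a', time: 1 }
```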


@@ -0,0 +1,69 @@
import { useState, useCallback } from 'react';
import type { Track } from '@/types/track';
import { createTrack, createTrackFromBuffer } from '@/lib/audio/track-utils';
export function useMultiTrack() {
// Note: localStorage persistence disabled in favor of IndexedDB project management
const [tracks, setTracks] = useState<Track[]>([]);
const addTrack = useCallback((name?: string, height?: number) => {
const track = createTrack(name, undefined, height);
setTracks((prev) => [...prev, track]);
return track;
}, []);
const addTrackFromBuffer = useCallback((buffer: AudioBuffer, name?: string, height?: number) => {
const track = createTrackFromBuffer(buffer, name, undefined, height);
setTracks((prev) => [...prev, track]);
return track;
}, []);
const removeTrack = useCallback((trackId: string) => {
setTracks((prev) => prev.filter((t) => t.id !== trackId));
}, []);
const updateTrack = useCallback((trackId: string, updates: Partial<Track>) => {
setTracks((prev) =>
prev.map((track) =>
track.id === trackId ? { ...track, ...updates } : track
)
);
}, []);
const clearTracks = useCallback(() => {
setTracks([]);
}, []);
const reorderTracks = useCallback((fromIndex: number, toIndex: number) => {
setTracks((prev) => {
const newTracks = [...prev];
const [movedTrack] = newTracks.splice(fromIndex, 1);
newTracks.splice(toIndex, 0, movedTrack);
return newTracks;
});
}, []);
const setTrackBuffer = useCallback((trackId: string, buffer: AudioBuffer) => {
setTracks((prev) =>
prev.map((track) =>
track.id === trackId ? { ...track, audioBuffer: buffer } : track
)
);
}, []);
const loadTracks = useCallback((tracksToLoad: Track[]) => {
setTracks(tracksToLoad);
}, []);
return {
tracks,
addTrack,
addTrackFromBuffer,
removeTrack,
updateTrack,
clearTracks,
reorderTracks,
setTrackBuffer,
loadTracks,
};
}


@@ -0,0 +1,955 @@
import { useState, useCallback, useRef, useEffect } from 'react';
import { getAudioContext } from '@/lib/audio/context';
import type { Track } from '@/types/track';
import { getTrackGain } from '@/lib/audio/track-utils';
import { applyEffectChain, updateEffectParameters, toggleEffectBypass, type EffectNodeInfo } from '@/lib/audio/effects/processor';
import { evaluateAutomationLinear } from '@/lib/audio/automation-utils';
export interface MultiTrackPlayerState {
isPlaying: boolean;
currentTime: number;
duration: number;
loopEnabled: boolean;
loopStart: number;
loopEnd: number;
playbackRate: number;
}
export interface TrackLevel {
trackId: string;
level: number;
}
export interface AutomationRecordingCallback {
(trackId: string, laneId: string, currentTime: number, value: number): void;
}
export function useMultiTrackPlayer(
tracks: Track[],
masterVolume: number = 1,
onRecordAutomation?: AutomationRecordingCallback
) {
const [isPlaying, setIsPlaying] = useState(false);
const [currentTime, setCurrentTime] = useState(0);
const [duration, setDuration] = useState(0);
const [trackLevels, setTrackLevels] = useState<Record<string, number>>({});
const [masterPeakLevel, setMasterPeakLevel] = useState(0);
const [masterRmsLevel, setMasterRmsLevel] = useState(0);
const [masterIsClipping, setMasterIsClipping] = useState(false);
const [loopEnabled, setLoopEnabled] = useState(false);
const [loopStart, setLoopStart] = useState(0);
const [loopEnd, setLoopEnd] = useState(0);
const [playbackRate, setPlaybackRate] = useState(1.0);
const audioContextRef = useRef<AudioContext | null>(null);
const sourceNodesRef = useRef<AudioBufferSourceNode[]>([]);
const gainNodesRef = useRef<GainNode[]>([]);
const panNodesRef = useRef<StereoPannerNode[]>([]);
const analyserNodesRef = useRef<AnalyserNode[]>([]);
const effectNodesRef = useRef<EffectNodeInfo[][]>([]); // Effect nodes per track
const masterGainNodeRef = useRef<GainNode | null>(null);
const masterAnalyserRef = useRef<AnalyserNode | null>(null);
const masterLevelMonitorFrameRef = useRef<number | null>(null);
const startTimeRef = useRef<number>(0);
const pausedAtRef = useRef<number>(0);
const animationFrameRef = useRef<number | null>(null);
const levelMonitorFrameRef = useRef<number | null>(null);
const automationFrameRef = useRef<number | null>(null);
const isMonitoringLevelsRef = useRef<boolean>(false);
const tracksRef = useRef<Track[]>(tracks); // Always keep latest tracks
const lastRecordedValuesRef = useRef<Map<string, number>>(new Map()); // Track last recorded values to detect changes
const onRecordAutomationRef = useRef<AutomationRecordingCallback | undefined>(onRecordAutomation);
const loopEnabledRef = useRef<boolean>(false);
const loopStartRef = useRef<number>(0);
const loopEndRef = useRef<number>(0);
const playbackRateRef = useRef<number>(1.0);
const isPlayingRef = useRef<boolean>(false);
// Keep tracksRef in sync with tracks prop
useEffect(() => {
tracksRef.current = tracks;
}, [tracks]);
// Keep loop refs in sync with state
useEffect(() => {
loopEnabledRef.current = loopEnabled;
loopStartRef.current = loopStart;
loopEndRef.current = loopEnd;
}, [loopEnabled, loopStart, loopEnd]);
// Keep playbackRate ref in sync with state
useEffect(() => {
playbackRateRef.current = playbackRate;
}, [playbackRate]);
// Keep onRecordAutomationRef in sync
useEffect(() => {
onRecordAutomationRef.current = onRecordAutomation;
}, [onRecordAutomation]);
// Calculate total duration from all tracks
useEffect(() => {
let maxDuration = 0;
for (const track of tracks) {
if (track.audioBuffer) {
maxDuration = Math.max(maxDuration, track.audioBuffer.duration);
}
}
setDuration(maxDuration);
// Initialize loop end to duration when duration changes
if (loopEnd === 0 || loopEnd > maxDuration) {
setLoopEnd(maxDuration);
}
}, [tracks, loopEnd]);
// Convert linear amplitude to dB scale normalized to 0-1 range
const linearToDbScale = (linear: number): number => {
if (linear === 0) return 0;
// Convert to dB (20 * log10(linear))
const db = 20 * Math.log10(linear);
// Normalize -60dB to 0dB range to 0-1
// -60dB or lower = 0%, 0dB = 100%
const minDb = -60;
const maxDb = 0;
const normalized = (db - minDb) / (maxDb - minDb);
// Clamp to 0-1 range
return Math.max(0, Math.min(1, normalized));
};
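A worked example of the mapping above, restated as a standalone function: full scale (linear 1.0) is 0 dB and maps to 100%, anything at or below -60 dB clamps to the meter floor, and -20 dB lands two thirds of the way up:

```typescript
// Same mapping as linearToDbScale above: -60 dB..0 dB normalized to 0..1.
function linearToDb(linear: number): number {
  if (linear === 0) return 0;
  const db = 20 * Math.log10(linear);
  return Math.max(0, Math.min(1, (db + 60) / 60));
}

linearToDb(1);     // 1 (0 dB, full scale)
linearToDb(0.001); // ~0 (-60 dB, meter floor)
linearToDb(0.1);   // ~0.667 (-20 dB)
```

This is why meters use a dB scale: a linear meter would spend most of its travel on the top 20 dB and crush everything quieter into the bottom few pixels.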
// Monitor playback levels for all tracks
const monitorPlaybackLevels = useCallback(() => {
if (!isMonitoringLevelsRef.current || analyserNodesRef.current.length === 0) return;
const levels: Record<string, number> = {};
analyserNodesRef.current.forEach((analyser, index) => {
const track = tracksRef.current[index];
if (!track) return;
const dataArray = new Float32Array(analyser.fftSize);
analyser.getFloatTimeDomainData(dataArray);
// Calculate peak level using float data (-1 to +1 range)
let peak = 0;
for (let i = 0; i < dataArray.length; i++) {
const abs = Math.abs(dataArray[i]);
if (abs > peak) {
peak = abs;
}
}
// Store raw linear peak (will be converted to dB in the fader component)
levels[track.id] = peak;
});
setTrackLevels(levels);
levelMonitorFrameRef.current = requestAnimationFrame(monitorPlaybackLevels);
}, []);
// Monitor master output levels (peak and RMS)
const monitorMasterLevels = useCallback(() => {
if (!masterAnalyserRef.current) {
return;
}
const analyser = masterAnalyserRef.current;
const bufferLength = analyser.fftSize;
const dataArray = new Float32Array(bufferLength);
analyser.getFloatTimeDomainData(dataArray);
// Calculate peak level (max absolute value)
let peak = 0;
for (let i = 0; i < bufferLength; i++) {
const abs = Math.abs(dataArray[i]);
if (abs > peak) {
peak = abs;
}
}
// Calculate RMS level (root mean square)
let sumSquares = 0;
for (let i = 0; i < bufferLength; i++) {
sumSquares += dataArray[i] * dataArray[i];
}
const rms = Math.sqrt(sumSquares / bufferLength);
// Detect clipping (signal >= 1.0)
const isClipping = peak >= 1.0;
setMasterPeakLevel(peak);
setMasterRmsLevel(rms);
if (isClipping) {
setMasterIsClipping(true);
}
masterLevelMonitorFrameRef.current = requestAnimationFrame(monitorMasterLevels);
}, []);
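The peak and RMS loops above can be checked on known signals: a full-scale square wave has peak and RMS both 1, while a constant half-scale signal has both equal to 0.5. The same math extracted into a plain function (the `measure` helper is illustrative, not part of the codebase):

```typescript
// Same peak/RMS math as monitorMasterLevels, on a plain array.
function measure(samples: Float32Array): { peak: number; rms: number } {
  let peak = 0;
  let sumSquares = 0;
  for (let i = 0; i < samples.length; i++) {
    const abs = Math.abs(samples[i]);
    if (abs > peak) peak = abs; // peak: max absolute value
    sumSquares += samples[i] * samples[i];
  }
  return { peak, rms: Math.sqrt(sumSquares / samples.length) };
}

// Full-scale square wave: peak 1, RMS 1.
const square = measure(new Float32Array([1, -1, 1, -1]));
// Half-scale constant: peak 0.5, RMS 0.5.
const constant = measure(new Float32Array([0.5, 0.5, 0.5, 0.5]));
```

For a sine wave the two diverge (RMS is peak / √2), which is why the hook reports both: peak drives the clipping indicator, RMS tracks perceived loudness.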
// Apply automation values during playback
const applyAutomation = useCallback(() => {
if (!audioContextRef.current) return;
const currentTime = pausedAtRef.current + (audioContextRef.current.currentTime - startTimeRef.current);
tracksRef.current.forEach((track, index) => {
// Apply volume automation
const volumeLane = track.automation.lanes.find(lane => lane.parameterId === 'volume');
if (volumeLane) {
let volumeValue: number | undefined;
// In write mode, record current track volume (only if value changed)
if (volumeLane.mode === 'write' && onRecordAutomationRef.current) {
volumeValue = track.volume;
const lastValue = lastRecordedValuesRef.current.get(`${track.id}-volume`);
// Only record if value has changed
if (lastValue === undefined || Math.abs(lastValue - volumeValue) > 0.001) {
lastRecordedValuesRef.current.set(`${track.id}-volume`, volumeValue);
onRecordAutomationRef.current(track.id, volumeLane.id, currentTime, volumeValue);
}
} else if (volumeLane.points.length > 0) {
// Otherwise play back automation
volumeValue = evaluateAutomationLinear(volumeLane.points, currentTime);
}
if (volumeValue !== undefined && gainNodesRef.current[index]) {
const trackGain = getTrackGain(track, tracks);
// Apply both track gain (mute/solo) and automated volume
gainNodesRef.current[index].gain.setValueAtTime(
trackGain * volumeValue,
audioContextRef.current!.currentTime
);
}
}
// Apply pan automation
const panLane = track.automation.lanes.find(lane => lane.parameterId === 'pan');
if (panLane) {
let automatedValue: number | undefined;
// In write mode, record current track pan (only if value changed)
if (panLane.mode === 'write' && onRecordAutomationRef.current) {
automatedValue = (track.pan + 1) / 2; // Convert -1 to 1 -> 0 to 1
const lastValue = lastRecordedValuesRef.current.get(`${track.id}-pan`);
// Only record if value has changed
if (lastValue === undefined || Math.abs(lastValue - automatedValue) > 0.001) {
lastRecordedValuesRef.current.set(`${track.id}-pan`, automatedValue);
onRecordAutomationRef.current(track.id, panLane.id, currentTime, automatedValue);
}
} else if (panLane.points.length > 0) {
// Otherwise play back automation
automatedValue = evaluateAutomationLinear(panLane.points, currentTime);
}
if (automatedValue !== undefined && panNodesRef.current[index]) {
// Pan automation values are 0-1, but StereoPannerNode expects -1 to 1
const panValue = (automatedValue * 2) - 1;
panNodesRef.current[index].pan.setValueAtTime(
panValue,
audioContextRef.current!.currentTime
);
}
}
// Apply effect parameter automation
track.automation.lanes.forEach(lane => {
// Check if this is an effect parameter (format: effect.{effectId}.{parameterName})
if (lane.parameterId.startsWith('effect.')) {
const parts = lane.parameterId.split('.');
if (parts.length === 3) {
const effectId = parts[1];
const paramName = parts[2];
// Find the effect in the track's effect chain
const effectIndex = track.effectChain.effects.findIndex(e => e.id === effectId);
const effect = track.effectChain.effects[effectIndex];
if (effectIndex >= 0 && effect) {
let automatedValue: number | undefined;
// In write mode, record current effect parameter value (only if value changed)
if (lane.mode === 'write' && onRecordAutomationRef.current && effect.parameters) {
const currentValue = (effect.parameters as any)[paramName];
if (currentValue !== undefined) {
// Normalize value to 0-1 range
const range = lane.valueRange.max - lane.valueRange.min;
const normalizedValue = (currentValue - lane.valueRange.min) / range;
const lastValue = lastRecordedValuesRef.current.get(`${track.id}-effect-${effectId}-${paramName}`);
// Only record if value has changed
if (lastValue === undefined || Math.abs(lastValue - normalizedValue) > 0.001) {
lastRecordedValuesRef.current.set(`${track.id}-effect-${effectId}-${paramName}`, normalizedValue);
onRecordAutomationRef.current(track.id, lane.id, currentTime, normalizedValue);
}
}
} else if (lane.points.length > 0) {
// Otherwise play back automation
automatedValue = evaluateAutomationLinear(lane.points, currentTime);
}
// Apply the automated value to the effect
if (automatedValue !== undefined && effectNodesRef.current[index] && effectNodesRef.current[index][effectIndex]) {
const effectNodeInfo = effectNodesRef.current[index][effectIndex];
// Convert normalized 0-1 value to actual parameter range
const actualValue = lane.valueRange.min + (automatedValue * (lane.valueRange.max - lane.valueRange.min));
// Update the effect parameter
if (effect.parameters) {
const updatedParams = { ...effect.parameters, [paramName]: actualValue } as any;
updateEffectParameters(audioContextRef.current!, effectNodeInfo, {
...effect,
parameters: updatedParams
});
}
}
}
}
}
});
});
automationFrameRef.current = requestAnimationFrame(applyAutomation);
}, []);
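// Automation values are stored normalized to 0-1. Round-trip sketch with
// illustrative numbers: for a lane range of 20..20000, a raw value of 2000
// normalizes to (2000 - 20) / (20000 - 20) ≈ 0.0991, and denormalizes back
// via 20 + 0.0991 * (20000 - 20) ≈ 2000.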
const updatePlaybackPosition = useCallback(() => {
if (!audioContextRef.current || !isPlayingRef.current) return;
const elapsed = (audioContextRef.current.currentTime - startTimeRef.current) * playbackRateRef.current;
const newTime = pausedAtRef.current + elapsed;
// Check if loop is enabled and we've reached the loop end
if (loopEnabledRef.current && loopEndRef.current > loopStartRef.current && newTime >= loopEndRef.current) {
// Loop back to start
pausedAtRef.current = loopStartRef.current;
startTimeRef.current = audioContextRef.current.currentTime;
setCurrentTime(loopStartRef.current);
// Restart all sources from loop start
sourceNodesRef.current.forEach(node => {
try {
node.stop();
node.disconnect();
} catch (e) {
// Ignore errors from already stopped nodes
}
});
// Re-trigger play from loop start
const tracks = tracksRef.current;
const audioContext = audioContextRef.current;
// Clear old sources
sourceNodesRef.current = [];
// Create new sources starting from loop start. Use a separate counter so
// node indices stay aligned with analyserNodesRef: tracks without audio
// buffers are skipped and have no nodes, so tracks.indexOf(track) would
// point at the wrong analyser.
let nodeIndex = 0;
for (const track of tracks) {
if (!track.audioBuffer) continue;
const source = audioContext.createBufferSource();
source.buffer = track.audioBuffer;
source.playbackRate.value = playbackRateRef.current;
// Connect to existing nodes (gain, pan, effects are still connected)
source.connect(analyserNodesRef.current[nodeIndex]);
nodeIndex++;
// Start from loop start position
source.start(0, loopStartRef.current);
sourceNodesRef.current.push(source);
}
animationFrameRef.current = requestAnimationFrame(updatePlaybackPosition);
return;
}
if (newTime >= duration) {
setIsPlaying(false);
isMonitoringLevelsRef.current = false;
setCurrentTime(0);
pausedAtRef.current = 0;
setTrackLevels({});
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = null;
}
if (levelMonitorFrameRef.current) {
cancelAnimationFrame(levelMonitorFrameRef.current);
levelMonitorFrameRef.current = null;
}
if (automationFrameRef.current) {
cancelAnimationFrame(automationFrameRef.current);
automationFrameRef.current = null;
}
return;
}
setCurrentTime(newTime);
animationFrameRef.current = requestAnimationFrame(updatePlaybackPosition);
}, [duration]);
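// Timebase sketch: if play() captured startTime at AudioContext time 10.0s
// with pausedAt = 3.0s and playbackRate = 1.5, then at context time 12.0s
// the transport position is 3.0 + (12.0 - 10.0) * 1.5 = 6.0s.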
const play = useCallback(() => {
if (tracks.length === 0 || tracks.every(t => !t.audioBuffer)) return;
const audioContext = getAudioContext();
audioContextRef.current = audioContext;
// Stop any existing playback
sourceNodesRef.current.forEach(node => {
try {
node.stop();
node.disconnect();
} catch (e) {
// Ignore errors from already stopped nodes
}
});
gainNodesRef.current.forEach(node => node.disconnect());
panNodesRef.current.forEach(node => node.disconnect());
if (masterGainNodeRef.current) {
masterGainNodeRef.current.disconnect();
}
sourceNodesRef.current = [];
gainNodesRef.current = [];
panNodesRef.current = [];
analyserNodesRef.current = [];
effectNodesRef.current = [];
// Create master gain node with analyser for metering
const masterGain = audioContext.createGain();
masterGain.gain.setValueAtTime(masterVolume, audioContext.currentTime);
const masterAnalyser = audioContext.createAnalyser();
masterAnalyser.fftSize = 256;
masterAnalyser.smoothingTimeConstant = 0.8;
// Connect: masterGain -> analyser -> destination
masterGain.connect(masterAnalyser);
masterAnalyser.connect(audioContext.destination);
masterGainNodeRef.current = masterGain;
masterAnalyserRef.current = masterAnalyser;
// Create audio graph for each track
for (const track of tracks) {
if (!track.audioBuffer) continue;
const source = audioContext.createBufferSource();
source.buffer = track.audioBuffer;
const gainNode = audioContext.createGain();
const panNode = audioContext.createStereoPanner();
const analyserNode = audioContext.createAnalyser();
analyserNode.fftSize = 256;
analyserNode.smoothingTimeConstant = 0.8;
// Set gain based on track volume and solo/mute state
const trackGain = getTrackGain(track, tracks);
gainNode.gain.setValueAtTime(trackGain, audioContext.currentTime);
// Set pan
panNode.pan.setValueAtTime(track.pan, audioContext.currentTime);
// Connect: source -> analyser -> gain -> pan -> effects -> master gain -> destination
// Analyser is before gain so it shows raw audio levels independent of volume fader
source.connect(analyserNode);
analyserNode.connect(gainNode);
gainNode.connect(panNode);
// Apply effect chain
console.log('[MultiTrackPlayer] Applying effect chain for track:', track.name, '- chain:', track.effectChain.id, track.effectChain.name, `(${track.effectChain.effects.length} effects)`, track.effectChain.effects);
const { outputNode, effectNodes } = applyEffectChain(audioContext, panNode, track.effectChain);
// Connect to master gain
outputNode.connect(masterGain);
console.log('[MultiTrackPlayer] Effect output connected with', effectNodes.length, 'effect nodes');
// Set playback rate
source.playbackRate.value = playbackRateRef.current;
// Start playback from current position
source.start(0, pausedAtRef.current);
// Store references
sourceNodesRef.current.push(source);
gainNodesRef.current.push(gainNode);
panNodesRef.current.push(panNode);
analyserNodesRef.current.push(analyserNode);
effectNodesRef.current.push(effectNodes);
// Handle ended event
source.onended = () => {
if (pausedAtRef.current + (audioContext.currentTime - startTimeRef.current) * playbackRateRef.current >= duration) {
setIsPlaying(false);
isMonitoringLevelsRef.current = false;
setCurrentTime(0);
pausedAtRef.current = 0;
setTrackLevels({});
}
};
}
startTimeRef.current = audioContext.currentTime;
isPlayingRef.current = true;
setIsPlaying(true);
updatePlaybackPosition();
// Start level monitoring
isMonitoringLevelsRef.current = true;
monitorPlaybackLevels();
monitorMasterLevels();
// Start automation
applyAutomation();
}, [tracks, duration, masterVolume, updatePlaybackPosition, monitorPlaybackLevels, monitorMasterLevels, applyAutomation]);
const pause = useCallback(() => {
if (!audioContextRef.current || !isPlaying) return;
// Stop all source nodes
sourceNodesRef.current.forEach(node => {
try {
node.stop();
node.disconnect();
} catch (e) {
// Ignore errors
}
});
// Update paused position, scaling elapsed context time by the playback
// rate to match updatePlaybackPosition
const elapsed = (audioContextRef.current.currentTime - startTimeRef.current) * playbackRateRef.current;
pausedAtRef.current = Math.min(pausedAtRef.current + elapsed, duration);
setCurrentTime(pausedAtRef.current);
isPlayingRef.current = false;
setIsPlaying(false);
// Stop level monitoring
isMonitoringLevelsRef.current = false;
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = null;
}
if (levelMonitorFrameRef.current) {
cancelAnimationFrame(levelMonitorFrameRef.current);
levelMonitorFrameRef.current = null;
}
if (masterLevelMonitorFrameRef.current) {
cancelAnimationFrame(masterLevelMonitorFrameRef.current);
masterLevelMonitorFrameRef.current = null;
}
if (automationFrameRef.current) {
cancelAnimationFrame(automationFrameRef.current);
automationFrameRef.current = null;
}
// Clear track levels
setTrackLevels({});
}, [isPlaying, duration]);
const stop = useCallback(() => {
pause();
pausedAtRef.current = 0;
setCurrentTime(0);
// Clear last recorded values when stopping
lastRecordedValuesRef.current.clear();
}, [pause]);
const seek = useCallback((time: number) => {
const wasPlaying = isPlaying;
if (wasPlaying) {
pause();
}
const clampedTime = Math.max(0, Math.min(time, duration));
pausedAtRef.current = clampedTime;
setCurrentTime(clampedTime);
if (wasPlaying) {
// Small delay to ensure state is updated
setTimeout(() => play(), 10);
}
}, [isPlaying, duration, pause, play]);
const togglePlayPause = useCallback(() => {
if (isPlaying) {
pause();
} else {
play();
}
}, [isPlaying, play, pause]);
// Update gain/pan when tracks change (simple updates that don't require graph rebuild)
useEffect(() => {
if (!isPlaying || !audioContextRef.current) return;
tracks.forEach((track, index) => {
if (gainNodesRef.current[index]) {
const trackGain = getTrackGain(track, tracks);
gainNodesRef.current[index].gain.setValueAtTime(
trackGain,
audioContextRef.current!.currentTime
);
}
if (panNodesRef.current[index]) {
panNodesRef.current[index].pan.setValueAtTime(
track.pan,
audioContextRef.current!.currentTime
);
}
});
}, [tracks, isPlaying]);
// Track effect chain structure to detect add/remove operations
const previousEffectStructureRef = useRef<string | null>(null);
// Detect effect chain structure changes (add/remove/reorder) and restart
useEffect(() => {
if (!isPlaying || !audioContextRef.current) return;
// Create a signature of the current effect structure (IDs and count)
const currentStructure = tracks.map(track =>
track.effectChain.effects.map(e => e.id).join(',')
).join('|');
// If structure changed (effects added/removed/reordered) while playing, restart
// Don't restart if tracks is empty (intermediate state during updates)
if (previousEffectStructureRef.current !== null &&
previousEffectStructureRef.current !== currentStructure &&
tracks.length > 0) {
console.log('[useMultiTrackPlayer] Effect chain structure changed, restarting...');
// Update the reference immediately to prevent re-triggering
previousEffectStructureRef.current = currentStructure;
// Update tracksRef with current tracks BEFORE setTimeout
tracksRef.current = tracks;
// Save current position (scale elapsed context time by the playback rate)
const elapsed = (audioContextRef.current.currentTime - startTimeRef.current) * playbackRateRef.current;
const currentPos = pausedAtRef.current + elapsed;
// Stop all source nodes
sourceNodesRef.current.forEach(node => {
try {
node.onended = null;
node.stop();
node.disconnect();
} catch (e) {
// Ignore errors
}
});
// Update position
pausedAtRef.current = currentPos;
setCurrentTime(currentPos);
isPlayingRef.current = false;
setIsPlaying(false);
// Clear animation frame
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = null;
}
// Restart after a brief delay
setTimeout(() => {
// Use tracksRef.current to get the latest tracks, not the stale closure
const latestTracks = tracksRef.current;
if (latestTracks.length === 0 || latestTracks.every(t => !t.audioBuffer)) return;
const audioContext = getAudioContext();
audioContextRef.current = audioContext;
// Disconnect old nodes
gainNodesRef.current.forEach(node => node.disconnect());
panNodesRef.current.forEach(node => node.disconnect());
effectNodesRef.current.forEach(trackEffects => {
trackEffects.forEach(effectNodeInfo => {
if (effectNodeInfo.node) {
try {
effectNodeInfo.node.disconnect();
} catch (e) {
// Ignore
}
}
if (effectNodeInfo.dryGain) effectNodeInfo.dryGain.disconnect();
if (effectNodeInfo.wetGain) effectNodeInfo.wetGain.disconnect();
});
});
if (masterGainNodeRef.current) {
masterGainNodeRef.current.disconnect();
}
if (masterAnalyserRef.current) {
masterAnalyserRef.current.disconnect();
}
// Reset refs
sourceNodesRef.current = [];
gainNodesRef.current = [];
panNodesRef.current = [];
analyserNodesRef.current = [];
effectNodesRef.current = [];
// Create master gain node with analyser for metering (mirrors play();
// without recreating the analyser, master level monitoring reads a stale
// node after the restart)
const masterGain = audioContext.createGain();
masterGain.gain.setValueAtTime(masterVolume, audioContext.currentTime);
const masterAnalyser = audioContext.createAnalyser();
masterAnalyser.fftSize = 256;
masterAnalyser.smoothingTimeConstant = 0.8;
masterGain.connect(masterAnalyser);
masterAnalyser.connect(audioContext.destination);
masterGainNodeRef.current = masterGain;
masterAnalyserRef.current = masterAnalyser;
// Create audio graph for each track
for (const track of latestTracks) {
if (!track.audioBuffer) continue;
const source = audioContext.createBufferSource();
source.buffer = track.audioBuffer;
source.playbackRate.value = playbackRateRef.current;
const gainNode = audioContext.createGain();
const panNode = audioContext.createStereoPanner();
const analyserNode = audioContext.createAnalyser();
analyserNode.fftSize = 256;
analyserNode.smoothingTimeConstant = 0.8;
// Set gain based on track volume and solo/mute state
const trackGain = getTrackGain(track, latestTracks);
gainNode.gain.setValueAtTime(trackGain, audioContext.currentTime);
// Set pan
panNode.pan.setValueAtTime(track.pan, audioContext.currentTime);
// Connect: source -> analyser -> gain -> pan -> effects -> master gain -> destination
// Analyser is before gain so it shows raw audio levels independent of volume fader
source.connect(analyserNode);
analyserNode.connect(gainNode);
gainNode.connect(panNode);
// Apply effect chain
const { outputNode, effectNodes } = applyEffectChain(audioContext, panNode, track.effectChain);
outputNode.connect(masterGain);
// Start playback from current position
source.start(0, pausedAtRef.current);
// Store references
sourceNodesRef.current.push(source);
gainNodesRef.current.push(gainNode);
panNodesRef.current.push(panNode);
analyserNodesRef.current.push(analyserNode);
effectNodesRef.current.push(effectNodes);
// Handle ended event
source.onended = () => {
if (pausedAtRef.current + (audioContext.currentTime - startTimeRef.current) * playbackRateRef.current >= duration) {
setIsPlaying(false);
isMonitoringLevelsRef.current = false;
setCurrentTime(0);
pausedAtRef.current = 0;
setTrackLevels({});
}
};
}
startTimeRef.current = audioContext.currentTime;
isPlayingRef.current = true;
setIsPlaying(true);
// Start level monitoring
isMonitoringLevelsRef.current = true;
// Start animation frame for position updates
const updatePosition = () => {
if (!audioContextRef.current || !isPlayingRef.current) return;
const elapsed = (audioContextRef.current.currentTime - startTimeRef.current) * playbackRateRef.current;
const newTime = pausedAtRef.current + elapsed;
if (newTime >= duration) {
setIsPlaying(false);
isMonitoringLevelsRef.current = false;
setCurrentTime(0);
pausedAtRef.current = 0;
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = null;
}
if (levelMonitorFrameRef.current) {
cancelAnimationFrame(levelMonitorFrameRef.current);
levelMonitorFrameRef.current = null;
}
if (masterLevelMonitorFrameRef.current) {
cancelAnimationFrame(masterLevelMonitorFrameRef.current);
masterLevelMonitorFrameRef.current = null;
}
if (automationFrameRef.current) {
cancelAnimationFrame(automationFrameRef.current);
automationFrameRef.current = null;
}
return;
}
setCurrentTime(newTime);
animationFrameRef.current = requestAnimationFrame(updatePosition);
};
updatePosition();
monitorPlaybackLevels();
monitorMasterLevels();
applyAutomation();
}, 10);
}
previousEffectStructureRef.current = currentStructure;
}, [tracks, isPlaying, duration, masterVolume, monitorPlaybackLevels, monitorMasterLevels, applyAutomation]);
// Stop playback when all tracks are deleted
useEffect(() => {
if (!isPlaying) return;
// If tracks become empty or all tracks have no audio buffer, stop playback
if (tracks.length === 0 || tracks.every(t => !t.audioBuffer)) {
console.log('[useMultiTrackPlayer] All tracks deleted, stopping playback');
stop();
}
}, [tracks, isPlaying, stop]);
// Update effect parameters and bypass state in real-time
useEffect(() => {
if (!isPlaying || !audioContextRef.current) return;
tracks.forEach((track, trackIndex) => {
const effectNodes = effectNodesRef.current[trackIndex];
if (!effectNodes) return;
// Only update if we have the same number of effects (no add/remove)
if (effectNodes.length !== track.effectChain.effects.length) return;
track.effectChain.effects.forEach((effect, effectIndex) => {
const effectNodeInfo = effectNodes[effectIndex];
if (!effectNodeInfo) return;
// Update bypass state
if (effect.enabled !== effectNodeInfo.effect.enabled) {
toggleEffectBypass(audioContextRef.current!, effectNodeInfo, effect.enabled);
effectNodeInfo.effect.enabled = effect.enabled;
}
// Update parameters (only works for certain effect types)
if (JSON.stringify(effect.parameters) !== JSON.stringify(effectNodeInfo.effect.parameters)) {
updateEffectParameters(audioContextRef.current!, effectNodeInfo, effect);
effectNodeInfo.effect.parameters = effect.parameters;
}
});
});
}, [tracks, isPlaying]);
// Update master volume when it changes
useEffect(() => {
if (!isPlaying || !audioContextRef.current || !masterGainNodeRef.current) return;
masterGainNodeRef.current.gain.setValueAtTime(
masterVolume,
audioContextRef.current.currentTime
);
}, [masterVolume, isPlaying]);
// Cleanup on unmount
useEffect(() => {
return () => {
isMonitoringLevelsRef.current = false;
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
if (levelMonitorFrameRef.current) {
cancelAnimationFrame(levelMonitorFrameRef.current);
}
if (masterLevelMonitorFrameRef.current) {
cancelAnimationFrame(masterLevelMonitorFrameRef.current);
}
if (automationFrameRef.current) {
cancelAnimationFrame(automationFrameRef.current);
}
sourceNodesRef.current.forEach(node => {
try {
node.stop();
node.disconnect();
} catch (e) {
// Ignore
}
});
gainNodesRef.current.forEach(node => node.disconnect());
panNodesRef.current.forEach(node => node.disconnect());
analyserNodesRef.current.forEach(node => node.disconnect());
if (masterGainNodeRef.current) {
masterGainNodeRef.current.disconnect();
}
};
}, []);
const resetClipIndicator = useCallback(() => {
setMasterIsClipping(false);
}, []);
const toggleLoop = useCallback(() => {
setLoopEnabled(prev => !prev);
}, []);
const setLoopPoints = useCallback((start: number, end: number) => {
setLoopStart(Math.max(0, start));
setLoopEnd(Math.min(duration, Math.max(start, end)));
}, [duration]);
const setLoopFromSelection = useCallback((selectionStart: number, selectionEnd: number) => {
if (selectionStart < selectionEnd) {
setLoopPoints(selectionStart, selectionEnd);
setLoopEnabled(true);
}
}, [setLoopPoints]);
const changePlaybackRate = useCallback((rate: number) => {
// Clamp rate between 0.25x and 2x
const clampedRate = Math.max(0.25, Math.min(2.0, rate));
setPlaybackRate(clampedRate);
// Update playback rate on all active source nodes
sourceNodesRef.current.forEach(source => {
source.playbackRate.value = clampedRate;
});
}, []);
return {
isPlaying,
currentTime,
duration,
trackLevels,
masterPeakLevel,
masterRmsLevel,
masterIsClipping,
masterAnalyser: masterAnalyserRef.current,
resetClipIndicator,
play,
pause,
stop,
seek,
togglePlayPause,
loopEnabled,
loopStart,
loopEnd,
toggleLoop,
setLoopPoints,
setLoopFromSelection,
playbackRate,
changePlaybackRate,
};
}
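// Usage sketch (the component wiring below is illustrative; the hook's
// exact parameters are defined where it is declared, above this excerpt):
//
//   const player = useMultiTrackPlayer(/* tracks, masterVolume, ... */);
//   <button onClick={player.togglePlayPause}>
//     {player.isPlaying ? 'Pause' : 'Play'}
//   </button>
//   <input type="range" min={0.25} max={2} step={0.05}
//     onChange={e => player.changePlaybackRate(Number(e.target.value))} />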

lib/hooks/useRecording.ts (new file)
@@ -0,0 +1,353 @@
'use client';
import * as React from 'react';
export interface RecordingState {
isRecording: boolean;
isPaused: boolean;
duration: number;
inputLevel: number;
}
export interface RecordingSettings {
inputGain: number; // 0.0 to 2.0 (1.0 = unity)
recordMono: boolean; // true = mono, false = stereo
sampleRate: number; // target sample rate (44100, 48000, etc.)
}
export interface UseRecordingReturn {
state: RecordingState;
settings: RecordingSettings;
startRecording: () => Promise<void>;
stopRecording: () => Promise<AudioBuffer | null>;
pauseRecording: () => void;
resumeRecording: () => void;
getInputDevices: () => Promise<MediaDeviceInfo[]>;
selectInputDevice: (deviceId: string) => Promise<void>;
requestPermission: () => Promise<boolean>;
setInputGain: (gain: number) => void;
setRecordMono: (mono: boolean) => void;
setSampleRate: (sampleRate: number) => void;
}
export function useRecording(): UseRecordingReturn {
const [state, setState] = React.useState<RecordingState>({
isRecording: false,
isPaused: false,
duration: 0,
inputLevel: 0,
});
const [settings, setSettings] = React.useState<RecordingSettings>({
inputGain: 1.0,
recordMono: false,
sampleRate: 48000,
});
const mediaRecorderRef = React.useRef<MediaRecorder | null>(null);
const audioContextRef = React.useRef<AudioContext | null>(null);
const analyserRef = React.useRef<AnalyserNode | null>(null);
const gainNodeRef = React.useRef<GainNode | null>(null);
const streamRef = React.useRef<MediaStream | null>(null);
const chunksRef = React.useRef<Blob[]>([]);
const startTimeRef = React.useRef<number>(0);
const animationFrameRef = React.useRef<number>(0);
const selectedDeviceIdRef = React.useRef<string>('');
const isMonitoringRef = React.useRef<boolean>(false);
// Request microphone permission
const requestPermission = React.useCallback(async (): Promise<boolean> => {
try {
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
stream.getTracks().forEach((track) => track.stop());
return true;
} catch (error) {
console.error('Microphone permission denied:', error);
return false;
}
}, []);
// Get available input devices
const getInputDevices = React.useCallback(async (): Promise<MediaDeviceInfo[]> => {
try {
const devices = await navigator.mediaDevices.enumerateDevices();
return devices.filter((device) => device.kind === 'audioinput');
} catch (error) {
console.error('Failed to enumerate devices:', error);
return [];
}
}, []);
// Select input device
const selectInputDevice = React.useCallback(async (deviceId: string): Promise<void> => {
selectedDeviceIdRef.current = deviceId;
}, []);
// Convert linear amplitude to dB scale normalized to 0-1 range
const linearToDbScale = React.useCallback((linear: number): number => {
if (linear === 0) return 0;
// Convert to dB (20 * log10(linear))
const db = 20 * Math.log10(linear);
// Normalize -60dB to 0dB range to 0-1
// -60dB or lower = 0%, 0dB = 100%
const minDb = -60;
const maxDb = 0;
const normalized = (db - minDb) / (maxDb - minDb);
// Clamp to 0-1 range
return Math.max(0, Math.min(1, normalized));
}, []);
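// Worked example: a peak of 0.5 is 20 * log10(0.5) ≈ -6.02 dB, which maps
// to (-6.02 - (-60)) / 60 ≈ 0.90 on the meter; a peak of 0.001 (-60 dB)
// maps to 0.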
// Monitor input level
const monitorInputLevel = React.useCallback(() => {
if (!analyserRef.current) return;
const analyser = analyserRef.current;
const dataArray = new Float32Array(analyser.fftSize);
const updateLevel = () => {
if (!isMonitoringRef.current) return;
analyser.getFloatTimeDomainData(dataArray);
// Calculate peak level using float data (-1 to +1 range)
let peak = 0;
for (let i = 0; i < dataArray.length; i++) {
const abs = Math.abs(dataArray[i]);
if (abs > peak) {
peak = abs;
}
}
// Convert linear peak to logarithmic dB scale
const dbLevel = linearToDbScale(peak);
setState((prev) => ({ ...prev, inputLevel: dbLevel }));
animationFrameRef.current = requestAnimationFrame(updateLevel);
};
updateLevel();
}, [linearToDbScale]);
// Start recording
const startRecording = React.useCallback(async (): Promise<void> => {
try {
// Get user media with selected device
const constraints: MediaStreamConstraints = {
audio: selectedDeviceIdRef.current
? { deviceId: { exact: selectedDeviceIdRef.current } }
: true,
};
const stream = await navigator.mediaDevices.getUserMedia(constraints);
streamRef.current = stream;
// Create audio context with target sample rate
const audioContext = new AudioContext({ sampleRate: settings.sampleRate });
audioContextRef.current = audioContext;
const source = audioContext.createMediaStreamSource(stream);
// Create gain node for input gain control
const gainNode = audioContext.createGain();
gainNode.gain.value = settings.inputGain;
gainNodeRef.current = gainNode;
// Create analyser for level monitoring
const analyser = audioContext.createAnalyser();
analyser.fftSize = 256;
analyser.smoothingTimeConstant = 0.3;
// Connect: source -> gain -> analyser
source.connect(gainNode);
gainNode.connect(analyser);
analyserRef.current = analyser;
// Route the gain-adjusted signal into the recorder via a stream
// destination, so inputGain affects the recording itself rather than
// only the level meter
const recordingDestination = audioContext.createMediaStreamDestination();
gainNode.connect(recordingDestination);
const mediaRecorder = new MediaRecorder(recordingDestination.stream);
mediaRecorderRef.current = mediaRecorder;
chunksRef.current = [];
mediaRecorder.ondataavailable = (event) => {
if (event.data.size > 0) {
chunksRef.current.push(event.data);
}
};
// Start recording
mediaRecorder.start();
startTimeRef.current = Date.now();
setState({
isRecording: true,
isPaused: false,
duration: 0,
inputLevel: 0,
});
// Start monitoring input level
isMonitoringRef.current = true;
monitorInputLevel();
} catch (error) {
console.error('Failed to start recording:', error);
throw error;
}
}, [monitorInputLevel, settings.sampleRate, settings.inputGain]);
// Stop recording and return AudioBuffer
const stopRecording = React.useCallback(async (): Promise<AudioBuffer | null> => {
return new Promise((resolve) => {
if (!mediaRecorderRef.current || !streamRef.current) {
resolve(null);
return;
}
const mediaRecorder = mediaRecorderRef.current;
mediaRecorder.onstop = async () => {
// Stop all tracks
streamRef.current?.getTracks().forEach((track) => track.stop());
// Create blob from recorded chunks
const blob = new Blob(chunksRef.current, { type: 'audio/webm' });
// Convert blob to AudioBuffer
try {
const arrayBuffer = await blob.arrayBuffer();
const audioContext = new AudioContext();
let audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
// Convert to mono if requested
if (settings.recordMono && audioBuffer.numberOfChannels > 1) {
const monoBuffer = audioContext.createBuffer(
1,
audioBuffer.length,
audioBuffer.sampleRate
);
const monoData = monoBuffer.getChannelData(0);
// Mix all channels to mono by averaging
for (let i = 0; i < audioBuffer.length; i++) {
let sum = 0;
for (let channel = 0; channel < audioBuffer.numberOfChannels; channel++) {
sum += audioBuffer.getChannelData(channel)[i];
}
monoData[i] = sum / audioBuffer.numberOfChannels;
}
audioBuffer = monoBuffer;
}
// Clean up
isMonitoringRef.current = false;
if (audioContextRef.current) {
await audioContextRef.current.close();
}
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
setState({
isRecording: false,
isPaused: false,
duration: 0,
inputLevel: 0,
});
resolve(audioBuffer);
} catch (error) {
console.error('Failed to decode recorded audio:', error);
resolve(null);
}
};
mediaRecorder.stop();
});
}, [settings.recordMono]);
// Pause recording
const pauseRecording = React.useCallback(() => {
if (mediaRecorderRef.current && state.isRecording && !state.isPaused) {
mediaRecorderRef.current.pause();
setState((prev) => ({ ...prev, isPaused: true }));
isMonitoringRef.current = false;
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
}
}, [state.isRecording, state.isPaused]);
// Resume recording
const resumeRecording = React.useCallback(() => {
if (mediaRecorderRef.current && state.isRecording && state.isPaused) {
mediaRecorderRef.current.resume();
setState((prev) => ({ ...prev, isPaused: false }));
isMonitoringRef.current = true;
monitorInputLevel();
}
}, [state.isRecording, state.isPaused, monitorInputLevel]);
// Update duration
React.useEffect(() => {
if (!state.isRecording || state.isPaused) return;
const interval = setInterval(() => {
const elapsed = (Date.now() - startTimeRef.current) / 1000;
setState((prev) => ({ ...prev, duration: elapsed }));
}, 100);
return () => clearInterval(interval);
}, [state.isRecording, state.isPaused]);
// Cleanup on unmount
React.useEffect(() => {
return () => {
isMonitoringRef.current = false;
if (streamRef.current) {
streamRef.current.getTracks().forEach((track) => track.stop());
}
if (audioContextRef.current) {
audioContextRef.current.close();
}
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
}
};
}, []);
// Settings setters
const setInputGain = React.useCallback((gain: number) => {
setSettings((prev) => ({ ...prev, inputGain: Math.max(0, Math.min(2, gain)) }));
// Update gain node if recording
if (gainNodeRef.current) {
gainNodeRef.current.gain.value = Math.max(0, Math.min(2, gain));
}
}, []);
const setRecordMono = React.useCallback((mono: boolean) => {
setSettings((prev) => ({ ...prev, recordMono: mono }));
}, []);
const setSampleRate = React.useCallback((sampleRate: number) => {
setSettings((prev) => ({ ...prev, sampleRate }));
}, []);
return {
state,
settings,
startRecording,
stopRecording,
pauseRecording,
resumeRecording,
getInputDevices,
selectInputDevice,
requestPermission,
setInputGain,
setRecordMono,
setSampleRate,
};
}
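The mono downmix in `stopRecording` can be exercised without a MediaRecorder or AudioContext. The sketch below (the function name is illustrative) applies the same channel-averaging to raw Float32Arrays:

```typescript
// Average an arbitrary number of channels into one, exactly as the
// stopRecording handler above does for the decoded AudioBuffer.
function downmixToMono(channels: Float32Array[]): Float32Array {
  const length = channels[0].length;
  const mono = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    let sum = 0;
    for (const channel of channels) {
      sum += channel[i];
    }
    mono[i] = sum / channels.length;
  }
  return mono;
}
```

Averaging (rather than summing) guarantees the mono result stays within -1..1 whenever the inputs do.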

lib/hooks/useSettings.ts (new file)
@@ -0,0 +1,152 @@
'use client';
import { useState, useEffect, useCallback } from 'react';
export interface AudioSettings {
bufferSize: number; // 256, 512, 1024, 2048, 4096
sampleRate: number; // 44100, 48000, 96000
autoNormalizeOnImport: boolean;
}
export interface UISettings {
theme: 'dark' | 'light' | 'auto';
fontSize: 'small' | 'medium' | 'large';
}
export interface EditorSettings {
autoSaveInterval: number; // seconds, 0 = disabled
undoHistoryLimit: number; // 10-200
snapToGrid: boolean;
gridResolution: number; // seconds
defaultZoom: number; // 1-20
}
export interface PerformanceSettings {
peakCalculationQuality: 'low' | 'medium' | 'high';
waveformRenderingQuality: 'low' | 'medium' | 'high';
enableSpectrogram: boolean;
maxFileSizeMB: number; // 100-1000
}
export interface Settings {
audio: AudioSettings;
ui: UISettings;
editor: EditorSettings;
performance: PerformanceSettings;
}
const DEFAULT_SETTINGS: Settings = {
audio: {
bufferSize: 2048,
sampleRate: 48000,
autoNormalizeOnImport: false,
},
ui: {
theme: 'dark',
fontSize: 'medium',
},
editor: {
autoSaveInterval: 3, // 3 seconds
undoHistoryLimit: 50,
snapToGrid: false,
gridResolution: 1.0, // 1 second
defaultZoom: 1,
},
performance: {
peakCalculationQuality: 'high',
waveformRenderingQuality: 'high',
enableSpectrogram: true,
maxFileSizeMB: 500,
},
};
const SETTINGS_STORAGE_KEY = 'audio-editor-settings';
function loadSettings(): Settings {
if (typeof window === 'undefined') return DEFAULT_SETTINGS;
try {
const stored = localStorage.getItem(SETTINGS_STORAGE_KEY);
if (!stored) return DEFAULT_SETTINGS;
const parsed = JSON.parse(stored);
// Merge with defaults to handle new settings added in updates
return {
audio: { ...DEFAULT_SETTINGS.audio, ...parsed.audio },
ui: { ...DEFAULT_SETTINGS.ui, ...parsed.ui },
editor: { ...DEFAULT_SETTINGS.editor, ...parsed.editor },
performance: { ...DEFAULT_SETTINGS.performance, ...parsed.performance },
};
} catch (error) {
console.error('Failed to load settings from localStorage:', error);
return DEFAULT_SETTINGS;
}
}
function saveSettings(settings: Settings): void {
if (typeof window === 'undefined') return;
try {
localStorage.setItem(SETTINGS_STORAGE_KEY, JSON.stringify(settings));
} catch (error) {
console.error('Failed to save settings to localStorage:', error);
}
}
export function useSettings() {
const [settings, setSettings] = useState<Settings>(loadSettings);
// Save to localStorage whenever settings change
useEffect(() => {
saveSettings(settings);
}, [settings]);
const updateAudioSettings = useCallback((updates: Partial<AudioSettings>) => {
setSettings((prev) => ({
...prev,
audio: { ...prev.audio, ...updates },
}));
}, []);
const updateUISettings = useCallback((updates: Partial<UISettings>) => {
setSettings((prev) => ({
...prev,
ui: { ...prev.ui, ...updates },
}));
}, []);
const updateEditorSettings = useCallback((updates: Partial<EditorSettings>) => {
setSettings((prev) => ({
...prev,
editor: { ...prev.editor, ...updates },
}));
}, []);
const updatePerformanceSettings = useCallback((updates: Partial<PerformanceSettings>) => {
setSettings((prev) => ({
...prev,
performance: { ...prev.performance, ...updates },
}));
}, []);
const resetSettings = useCallback(() => {
setSettings(DEFAULT_SETTINGS);
}, []);
const resetCategory = useCallback((category: keyof Settings) => {
setSettings((prev) => ({
...prev,
[category]: DEFAULT_SETTINGS[category],
}));
}, []);
return {
settings,
updateAudioSettings,
updateUISettings,
updateEditorSettings,
updatePerformanceSettings,
resetSettings,
resetCategory,
};
}

lib/storage/db.ts (new file, 193 lines)
@@ -0,0 +1,193 @@
/**
* IndexedDB database for project storage
*/
export const DB_NAME = 'audio-editor-db';
export const DB_VERSION = 1;
export interface ProjectMetadata {
id: string;
name: string;
description?: string;
createdAt: number;
updatedAt: number;
duration: number; // Total project duration in seconds
sampleRate: number;
trackCount: number;
thumbnail?: string; // Base64 encoded waveform thumbnail
}
export interface SerializedAudioBuffer {
sampleRate: number;
length: number;
numberOfChannels: number;
channelData: Float32Array[]; // Array of channel data
}
export interface SerializedTrack {
id: string;
name: string;
color: string;
volume: number;
pan: number;
muted: boolean;
soloed: boolean;
collapsed: boolean;
height: number;
audioBuffer: SerializedAudioBuffer | null;
effects: any[]; // Effect chain
automation: any; // Automation data
recordEnabled: boolean;
}
export interface ProjectData {
metadata: ProjectMetadata;
tracks: SerializedTrack[];
settings: {
zoom: number;
currentTime: number;
sampleRate: number;
};
}
/**
* Initialize IndexedDB database
*/
export function initDB(): Promise<IDBDatabase> {
return new Promise((resolve, reject) => {
const request = indexedDB.open(DB_NAME, DB_VERSION);
request.onerror = () => reject(request.error);
request.onsuccess = () => resolve(request.result);
request.onupgradeneeded = (event) => {
const db = (event.target as IDBOpenDBRequest).result;
// Create projects object store
if (!db.objectStoreNames.contains('projects')) {
const projectStore = db.createObjectStore('projects', { keyPath: 'metadata.id' });
projectStore.createIndex('updatedAt', 'metadata.updatedAt', { unique: false });
projectStore.createIndex('name', 'metadata.name', { unique: false });
}
// Create audio buffers object store (for large files)
if (!db.objectStoreNames.contains('audioBuffers')) {
db.createObjectStore('audioBuffers', { keyPath: 'id' });
}
};
});
}
/**
* Get all projects (metadata only for list view)
*/
export async function getAllProjects(): Promise<ProjectMetadata[]> {
const db = await initDB();
return new Promise((resolve, reject) => {
const transaction = db.transaction(['projects'], 'readonly');
const store = transaction.objectStore('projects');
const index = store.index('updatedAt');
const request = index.openCursor(null, 'prev'); // Most recent first
const projects: ProjectMetadata[] = [];
request.onsuccess = () => {
const cursor = request.result;
if (cursor) {
projects.push(cursor.value.metadata);
cursor.continue();
} else {
resolve(projects);
}
};
request.onerror = () => reject(request.error);
});
}
/**
* Save project to IndexedDB
*/
export async function saveProject(project: ProjectData): Promise<void> {
const db = await initDB();
return new Promise((resolve, reject) => {
const transaction = db.transaction(['projects'], 'readwrite');
const store = transaction.objectStore('projects');
const request = store.put(project);
request.onsuccess = () => resolve();
request.onerror = () => reject(request.error);
});
}
/**
* Load project from IndexedDB
*/
export async function loadProject(projectId: string): Promise<ProjectData | null> {
const db = await initDB();
return new Promise((resolve, reject) => {
const transaction = db.transaction(['projects'], 'readonly');
const store = transaction.objectStore('projects');
const request = store.get(projectId);
request.onsuccess = () => resolve(request.result || null);
request.onerror = () => reject(request.error);
});
}
/**
* Delete project from IndexedDB
*/
export async function deleteProject(projectId: string): Promise<void> {
const db = await initDB();
return new Promise((resolve, reject) => {
const transaction = db.transaction(['projects'], 'readwrite');
const store = transaction.objectStore('projects');
const request = store.delete(projectId);
request.onsuccess = () => resolve();
request.onerror = () => reject(request.error);
});
}
/**
* Serialize AudioBuffer for storage
*/
export function serializeAudioBuffer(buffer: AudioBuffer): SerializedAudioBuffer {
const channelData: Float32Array[] = [];
for (let i = 0; i < buffer.numberOfChannels; i++) {
channelData.push(new Float32Array(buffer.getChannelData(i)));
}
return {
sampleRate: buffer.sampleRate,
length: buffer.length,
numberOfChannels: buffer.numberOfChannels,
channelData,
};
}
/**
* Deserialize AudioBuffer from storage
*/
export function deserializeAudioBuffer(
serialized: SerializedAudioBuffer,
audioContext: AudioContext
): AudioBuffer {
const buffer = audioContext.createBuffer(
serialized.numberOfChannels,
serialized.length,
serialized.sampleRate
);
for (let i = 0; i < serialized.numberOfChannels; i++) {
buffer.copyToChannel(new Float32Array(serialized.channelData[i]), i);
}
return buffer;
}
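serializeAudioBuffer copies each channel into a fresh Float32Array, so later edits to the live buffer cannot mutate the stored snapshot. That copy semantics can be checked with a duck-typed buffer, restating the function inline since a real AudioBuffer requires a browser AudioContext:

```typescript
interface BufferLike {
  sampleRate: number;
  length: number;
  numberOfChannels: number;
  getChannelData(i: number): Float32Array;
}

// Same logic as serializeAudioBuffer above.
function snapshot(buffer: BufferLike) {
  const channelData: Float32Array[] = [];
  for (let i = 0; i < buffer.numberOfChannels; i++) {
    channelData.push(new Float32Array(buffer.getChannelData(i)));
  }
  return {
    sampleRate: buffer.sampleRate,
    length: buffer.length,
    numberOfChannels: buffer.numberOfChannels,
    channelData,
  };
}

// Duck-typed stand-in for an AudioBuffer.
const live = new Float32Array([0.5, -0.25]);
const mock: BufferLike = {
  sampleRate: 48000,
  length: 2,
  numberOfChannels: 1,
  getChannelData: () => live,
};

const snap = snapshot(mock);
live[0] = 0; // mutate the "live" buffer after snapshotting
```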

lib/storage/projects.ts (new file, 337 lines)
@@ -0,0 +1,337 @@
/**
* Project management service
*/
import type { Track } from '@/types/track';
import {
saveProject,
loadProject,
getAllProjects,
deleteProject,
serializeAudioBuffer,
deserializeAudioBuffer,
type ProjectData,
type SerializedTrack,
type SerializedAudioBuffer,
} from './db';
import type { ProjectMetadata } from './db';
import { getAudioContext } from '../audio/context';
import { generateId } from '../audio/effects/chain';
// Re-export ProjectMetadata for easier importing
export type { ProjectMetadata } from './db';
/**
* Generate unique project ID
*/
export function generateProjectId(): string {
return `project_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
}
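These IDs combine a millisecond timestamp with a base-36 random suffix, so uniqueness is probabilistic rather than guaranteed. A quick shape check, restating the generator inline (`makeProjectId` is a hypothetical local name):

```typescript
// Same construction as generateProjectId above.
function makeProjectId(): string {
  return `project_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
}

const id = makeProjectId();
```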
/**
* Serialize effects by removing any non-serializable data (functions, nodes, etc.)
*/
function serializeEffects(effects: any[]): any[] {
return effects.map(effect => ({
id: effect.id,
type: effect.type,
name: effect.name,
enabled: effect.enabled,
expanded: effect.expanded,
parameters: effect.parameters ? JSON.parse(JSON.stringify(effect.parameters)) : undefined,
}));
}
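serializeEffects keeps only plain-data fields, so live audio nodes or callbacks attached to an effect never reach IndexedDB, and parameters are deep-cloned rather than shared. A standalone sketch of that behavior (restated inline; the `node` field is a hypothetical stand-in for runtime-only state):

```typescript
// Restating serializeEffects' per-effect mapping for a standalone check.
function stripEffect(effect: any) {
  return {
    id: effect.id,
    type: effect.type,
    name: effect.name,
    enabled: effect.enabled,
    expanded: effect.expanded,
    parameters: effect.parameters
      ? JSON.parse(JSON.stringify(effect.parameters))
      : undefined,
  };
}

const liveEffect = {
  id: 'fx1',
  type: 'gain',
  name: 'Gain',
  enabled: true,
  expanded: false,
  parameters: { gain: 0.5 },
  node: { disconnect: () => {} }, // runtime-only state that must not be stored
};

const serialized = stripEffect(liveEffect);
```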
/**
* Convert tracks to serialized format
*/
function serializeTracks(tracks: Track[]): SerializedTrack[] {
return tracks.map(track => {
// Serialize automation by deep cloning to remove any functions
const automation = track.automation ? JSON.parse(JSON.stringify(track.automation)) : { lanes: [], showAutomation: false };
return {
id: track.id,
name: track.name,
color: track.color,
volume: track.volume,
pan: track.pan,
muted: track.mute,
soloed: track.solo,
collapsed: track.collapsed,
height: track.height,
audioBuffer: track.audioBuffer ? serializeAudioBuffer(track.audioBuffer) : null,
effects: serializeEffects(track.effectChain?.effects || []),
automation,
recordEnabled: track.recordEnabled,
};
});
}
/**
* Convert serialized tracks back to Track format
*/
function deserializeTracks(serialized: SerializedTrack[]): Track[] {
const audioContext = getAudioContext();
return serialized.map(track => ({
id: track.id,
name: track.name,
color: track.color,
volume: track.volume,
pan: track.pan,
mute: track.muted,
solo: track.soloed,
collapsed: track.collapsed,
height: track.height,
audioBuffer: track.audioBuffer ? deserializeAudioBuffer(track.audioBuffer, audioContext) : null,
effectChain: {
id: generateId(),
name: `${track.name} FX`,
effects: track.effects,
},
automation: track.automation,
recordEnabled: track.recordEnabled,
selected: false,
showEffects: false,
selection: null, // Reset selection on load
}));
}
/**
* Calculate total project duration
*/
function calculateDuration(tracks: Track[]): number {
let maxDuration = 0;
for (const track of tracks) {
if (track.audioBuffer) {
maxDuration = Math.max(maxDuration, track.audioBuffer.duration);
}
}
return maxDuration;
}
/**
* Save current project state
*/
export async function saveCurrentProject(
projectId: string | null,
projectName: string,
tracks: Track[],
settings: {
zoom: number;
currentTime: number;
sampleRate: number;
},
description?: string
): Promise<string> {
const id = projectId || generateProjectId();
const now = Date.now();
const metadata: ProjectMetadata = {
id,
name: projectName,
description,
createdAt: projectId ? (await loadProject(id))?.metadata.createdAt || now : now,
updatedAt: now,
duration: calculateDuration(tracks),
sampleRate: settings.sampleRate,
trackCount: tracks.length,
};
const projectData: ProjectData = {
metadata,
tracks: serializeTracks(tracks),
settings,
};
await saveProject(projectData);
return id;
}
/**
* Load project and restore state
*/
export async function loadProjectById(projectId: string): Promise<{
tracks: Track[];
settings: {
zoom: number;
currentTime: number;
sampleRate: number;
};
metadata: ProjectMetadata;
} | null> {
const project = await loadProject(projectId);
if (!project) return null;
return {
tracks: deserializeTracks(project.tracks),
settings: project.settings,
metadata: project.metadata,
};
}
/**
* Get list of all projects
*/
export async function listProjects(): Promise<ProjectMetadata[]> {
return getAllProjects();
}
/**
* Delete a project
*/
export async function removeProject(projectId: string): Promise<void> {
return deleteProject(projectId);
}
/**
* Duplicate a project
*/
export async function duplicateProject(sourceProjectId: string, newName: string): Promise<string> {
const project = await loadProject(sourceProjectId);
if (!project) throw new Error('Project not found');
const newId = generateProjectId();
const now = Date.now();
const newProject: ProjectData = {
...project,
metadata: {
...project.metadata,
id: newId,
name: newName,
createdAt: now,
updatedAt: now,
},
};
await saveProject(newProject);
return newId;
}
/**
* Export project as ZIP file with separate audio files
*/
export async function exportProjectAsJSON(projectId: string): Promise<void> {
const JSZip = (await import('jszip')).default;
const project = await loadProject(projectId);
if (!project) throw new Error('Project not found');
const zip = new JSZip();
const audioContext = getAudioContext();
// Create metadata without audio buffers
const metadata = {
...project,
tracks: project.tracks.map((track, index) => ({
...track,
audioBuffer: track.audioBuffer ? {
fileName: `track_${index}.wav`,
sampleRate: track.audioBuffer.sampleRate,
length: track.audioBuffer.length,
numberOfChannels: track.audioBuffer.numberOfChannels,
} : null,
})),
};
// Add project.json to ZIP
zip.file('project.json', JSON.stringify(metadata, null, 2));
// Convert audio buffers to WAV and add to ZIP
for (let i = 0; i < project.tracks.length; i++) {
const track = project.tracks[i];
if (track.audioBuffer) {
// Deserialize audio buffer
const buffer = deserializeAudioBuffer(track.audioBuffer, audioContext);
// Convert to WAV
const { audioBufferToWav } = await import('@/lib/audio/export');
const wavBlob = await audioBufferToWav(buffer);
// Add to ZIP
zip.file(`track_${i}.wav`, wavBlob);
}
}
// Generate ZIP and download
const zipBlob = await zip.generateAsync({ type: 'blob' });
const url = URL.createObjectURL(zipBlob);
const a = document.createElement('a');
a.href = url;
a.download = `${project.metadata.name.replace(/[^a-z0-9]/gi, '_').toLowerCase()}_${Date.now()}.zip`;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
}
/**
* Import project from ZIP file
*/
export async function importProjectFromJSON(file: File): Promise<string> {
const JSZip = (await import('jszip')).default;
try {
const zip = await JSZip.loadAsync(file);
// Read project.json
const projectJsonFile = zip.file('project.json');
if (!projectJsonFile) throw new Error('Invalid project file: missing project.json');
const projectJson = await projectJsonFile.async('text');
const metadata = JSON.parse(projectJson);
// Read audio files and reconstruct tracks
const audioContext = getAudioContext();
const tracks: SerializedTrack[] = [];
for (let i = 0; i < metadata.tracks.length; i++) {
const trackMeta = metadata.tracks[i];
let audioBuffer: SerializedAudioBuffer | null = null;
if (trackMeta.audioBuffer?.fileName) {
const audioFile = zip.file(trackMeta.audioBuffer.fileName);
if (audioFile) {
// Read WAV file as array buffer
const arrayBuffer = await audioFile.async('arraybuffer');
// Decode audio data
const decodedBuffer = await audioContext.decodeAudioData(arrayBuffer);
// Serialize for storage
audioBuffer = serializeAudioBuffer(decodedBuffer);
}
}
tracks.push({
...trackMeta,
audioBuffer,
});
}
// Generate new ID to avoid conflicts
const newId = generateProjectId();
const now = Date.now();
const importedProject: ProjectData = {
...metadata,
tracks,
metadata: {
...metadata.metadata,
id: newId,
name: `${metadata.metadata.name} (Imported)`,
createdAt: now,
updatedAt: now,
},
};
await saveProject(importedProject);
return newId;
} catch (error) {
console.error('Import error:', error);
throw new Error('Failed to import project file');
}
}

lib/utils/audio-cleanup.ts (new file, 149 lines)
@@ -0,0 +1,149 @@
/**
* Audio cleanup utilities to prevent memory leaks
*/
/**
* Safely disconnect and cleanup an AudioNode
*/
export function cleanupAudioNode(node: AudioNode | null | undefined): void {
if (!node) return;
try {
node.disconnect();
} catch (error) {
// Node may already be disconnected, ignore error
console.debug('AudioNode cleanup error (expected):', error);
}
}
/**
* Cleanup multiple audio nodes
*/
export function cleanupAudioNodes(nodes: Array<AudioNode | null | undefined>): void {
nodes.forEach(cleanupAudioNode);
}
/**
* Safely stop and cleanup an AudioBufferSourceNode
*/
export function cleanupAudioSource(source: AudioBufferSourceNode | null | undefined): void {
if (!source) return;
try {
source.stop();
} catch (error) {
// Source may already be stopped, ignore error
console.debug('AudioSource stop error (expected):', error);
}
cleanupAudioNode(source);
}
/**
* Cleanup canvas and release resources
*/
export function cleanupCanvas(canvas: HTMLCanvasElement | null | undefined): void {
if (!canvas) return;
const ctx = canvas.getContext('2d');
if (ctx) {
// Clear the canvas
ctx.clearRect(0, 0, canvas.width, canvas.height);
// Reset transform
ctx.setTransform(1, 0, 0, 1, 0, 0);
}
// Release context (helps with memory)
canvas.width = 0;
canvas.height = 0;
}
/**
* Cancel animation frame safely
*/
export function cleanupAnimationFrame(frameId: number | null | undefined): void {
if (frameId !== null && frameId !== undefined) {
cancelAnimationFrame(frameId);
}
}
/**
* Cleanup media stream tracks
*/
export function cleanupMediaStream(stream: MediaStream | null | undefined): void {
if (!stream) return;
stream.getTracks().forEach(track => {
track.stop();
});
}
/**
* Create a cleanup registry for managing multiple cleanup tasks
*/
export class CleanupRegistry {
private cleanupTasks: Array<() => void> = [];
/**
* Register a cleanup task
*/
register(cleanup: () => void): void {
this.cleanupTasks.push(cleanup);
}
/**
* Register an audio node for cleanup
*/
registerAudioNode(node: AudioNode): void {
this.register(() => cleanupAudioNode(node));
}
/**
* Register an audio source for cleanup
*/
registerAudioSource(source: AudioBufferSourceNode): void {
this.register(() => cleanupAudioSource(source));
}
/**
* Register a canvas for cleanup
*/
registerCanvas(canvas: HTMLCanvasElement): void {
this.register(() => cleanupCanvas(canvas));
}
/**
* Register an animation frame for cleanup
*/
registerAnimationFrame(frameId: number): void {
this.register(() => cleanupAnimationFrame(frameId));
}
/**
* Register a media stream for cleanup
*/
registerMediaStream(stream: MediaStream): void {
this.register(() => cleanupMediaStream(stream));
}
/**
* Execute all cleanup tasks and clear the registry
*/
cleanup(): void {
this.cleanupTasks.forEach(task => {
try {
task();
} catch (error) {
console.error('Cleanup task failed:', error);
}
});
this.cleanupTasks = [];
}
/**
* Get the number of registered cleanup tasks
*/
get size(): number {
return this.cleanupTasks.length;
}
}
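A registry call site typically registers teardown work as resources are created, then runs `cleanup()` once from an unmount handler; a task that throws must not block the rest. The sketch below restates just the core of the class above as a trimmed `MiniRegistry`:

```typescript
// Minimal restatement of CleanupRegistry's core behavior.
class MiniRegistry {
  private tasks: Array<() => void> = [];
  register(task: () => void): void {
    this.tasks.push(task);
  }
  cleanup(): void {
    this.tasks.forEach((task) => {
      try {
        task();
      } catch {
        // a failing task must not block the remaining ones
      }
    });
    this.tasks = [];
  }
  get size(): number {
    return this.tasks.length;
  }
}

const registry = new MiniRegistry();
let disconnected = 0;
registry.register(() => { disconnected++; });
registry.register(() => { throw new Error('boom'); }); // swallowed
registry.register(() => { disconnected++; });

const before = registry.size; // 3 registered tasks
registry.cleanup();
const after = registry.size; // registry is emptied after cleanup
```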

lib/utils/browser-compat.ts (new file, 128 lines)
@@ -0,0 +1,128 @@
/**
* Browser compatibility checking utilities
*/
export interface BrowserCompatibility {
isSupported: boolean;
missingFeatures: string[];
warnings: string[];
}
/**
* Check if all required browser features are supported
*/
export function checkBrowserCompatibility(): BrowserCompatibility {
const missingFeatures: string[] = [];
const warnings: string[] = [];
// Check if running in browser
if (typeof window === 'undefined') {
return {
isSupported: true,
missingFeatures: [],
warnings: [],
};
}
// Check Web Audio API
if (!window.AudioContext && !(window as any).webkitAudioContext) {
missingFeatures.push('Web Audio API');
}
// Check IndexedDB
if (!window.indexedDB) {
missingFeatures.push('IndexedDB');
}
// Check localStorage
try {
localStorage.setItem('test', 'test');
localStorage.removeItem('test');
} catch (e) {
missingFeatures.push('LocalStorage');
}
// Check Canvas API
const canvas = document.createElement('canvas');
if (!canvas.getContext || !canvas.getContext('2d')) {
missingFeatures.push('Canvas API');
}
// Check MediaDevices API (for recording)
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
warnings.push('Microphone recording not supported (requires HTTPS or localhost)');
}
// Check File API
if (!window.File || !window.FileReader || !window.FileList || !window.Blob) {
missingFeatures.push('File API');
}
// Check OfflineAudioContext
if (!window.OfflineAudioContext && !(window as any).webkitOfflineAudioContext) {
missingFeatures.push('OfflineAudioContext (required for audio processing)');
}
return {
isSupported: missingFeatures.length === 0,
missingFeatures,
warnings,
};
}
/**
* Get user-friendly browser name
*/
export function getBrowserInfo(): { name: string; version: string } {
// Check if running in browser
if (typeof window === 'undefined' || typeof navigator === 'undefined') {
return { name: 'Unknown', version: 'Unknown' };
}
const userAgent = navigator.userAgent;
let name = 'Unknown';
let version = 'Unknown';
if (userAgent.indexOf('Chrome') > -1 && userAgent.indexOf('Edg') === -1) {
name = 'Chrome';
const match = userAgent.match(/Chrome\/(\d+)/);
version = match ? match[1] : 'Unknown';
} else if (userAgent.indexOf('Edg') > -1) {
name = 'Edge';
const match = userAgent.match(/Edg\/(\d+)/);
version = match ? match[1] : 'Unknown';
} else if (userAgent.indexOf('Firefox') > -1) {
name = 'Firefox';
const match = userAgent.match(/Firefox\/(\d+)/);
version = match ? match[1] : 'Unknown';
} else if (userAgent.indexOf('Safari') > -1 && userAgent.indexOf('Chrome') === -1) {
name = 'Safari';
const match = userAgent.match(/Version\/(\d+)/);
version = match ? match[1] : 'Unknown';
}
return { name, version };
}
/**
* Check if browser version meets minimum requirements
*/
export function checkMinimumVersion(): boolean {
const { name, version } = getBrowserInfo();
const versionNum = parseInt(version, 10);
const minimumVersions: Record<string, number> = {
Chrome: 90,
Edge: 90,
Firefox: 88,
Safari: 14,
};
const minVersion = minimumVersions[name];
if (!minVersion) {
// Unknown browser, assume it's ok
return true;
}
return versionNum >= minVersion;
}
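The version gate is a table lookup plus an integer compare, with unknown browsers passing by default. Restated below with the browser name and version passed in as parameters (a hypothetical `meetsMinimum` helper) so it can be checked without a real `navigator`:

```typescript
// Same table as checkMinimumVersion above.
const MINIMUM_VERSIONS: Record<string, number> = {
  Chrome: 90,
  Edge: 90,
  Firefox: 88,
  Safari: 14,
};

// Same policy as checkMinimumVersion, parameterized instead of reading navigator.
function meetsMinimum(name: string, version: string): boolean {
  const min = MINIMUM_VERSIONS[name];
  if (!min) return true; // unknown browser: assume it's ok
  return parseInt(version, 10) >= min; // NaN compares false, so 'Unknown' fails
}
```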

lib/utils/memory-limits.ts (new file, 160 lines)
@@ -0,0 +1,160 @@
/**
* Memory limit checking utilities for audio file handling
*/
export interface MemoryCheckResult {
allowed: boolean;
warning?: string;
estimatedMemoryMB: number;
availableMemoryMB?: number;
}
/**
* Estimate memory required for an audio buffer
* @param duration Duration in seconds
* @param sampleRate Sample rate (default: 48000 Hz)
* @param channels Number of channels (default: 2 for stereo)
* @returns Estimated memory in MB
*/
export function estimateAudioMemory(
duration: number,
sampleRate: number = 48000,
channels: number = 2
): number {
// Each sample is a 32-bit float (4 bytes)
const bytesPerSample = 4;
const totalSamples = duration * sampleRate * channels;
const bytes = totalSamples * bytesPerSample;
// Convert to MB
return bytes / (1024 * 1024);
}
/**
* Get available device memory if supported
* @returns Available memory in MB, or undefined if not supported
*/
export function getAvailableMemory(): number | undefined {
if (typeof navigator === 'undefined') return undefined;
// @ts-ignore - deviceMemory is not in TypeScript types yet
const deviceMemory = navigator.deviceMemory;
if (typeof deviceMemory === 'number') {
// deviceMemory is in GB, convert to MB
return deviceMemory * 1024;
}
return undefined;
}
/**
* Check if a file size is within safe memory limits
* @param fileSizeBytes File size in bytes
* @returns Memory check result
*/
export function checkFileMemoryLimit(fileSizeBytes: number): MemoryCheckResult {
// Estimate memory usage (audio files decompress to ~10x their size)
const estimatedMemoryMB = (fileSizeBytes / (1024 * 1024)) * 10;
const availableMemoryMB = getAvailableMemory();
// Conservative limits
const WARN_THRESHOLD_MB = 100; // Warn if file will use > 100MB
const MAX_RECOMMENDED_MB = 500; // Don't recommend files > 500MB
if (estimatedMemoryMB > MAX_RECOMMENDED_MB) {
return {
allowed: false,
warning: `This file may require ${Math.round(estimatedMemoryMB)}MB of memory. ` +
`Files larger than ${MAX_RECOMMENDED_MB}MB are not recommended as they may cause performance issues or crashes.`,
estimatedMemoryMB,
availableMemoryMB,
};
}
if (estimatedMemoryMB > WARN_THRESHOLD_MB) {
const warning = availableMemoryMB
? `This file will require approximately ${Math.round(estimatedMemoryMB)}MB of memory. ` +
`Your device has ${Math.round(availableMemoryMB)}MB available.`
: `This file will require approximately ${Math.round(estimatedMemoryMB)}MB of memory. ` +
`Large files may cause performance issues on devices with limited memory.`;
return {
allowed: true,
warning,
estimatedMemoryMB,
availableMemoryMB,
};
}
return {
allowed: true,
estimatedMemoryMB,
availableMemoryMB,
};
}
/**
* Check if an audio buffer is within safe memory limits
* @param duration Duration in seconds
* @param sampleRate Sample rate
* @param channels Number of channels
* @returns Memory check result
*/
export function checkAudioBufferMemoryLimit(
duration: number,
sampleRate: number = 48000,
channels: number = 2
): MemoryCheckResult {
const estimatedMemoryMB = estimateAudioMemory(duration, sampleRate, channels);
const availableMemoryMB = getAvailableMemory();
const WARN_THRESHOLD_MB = 100;
const MAX_RECOMMENDED_MB = 500;
if (estimatedMemoryMB > MAX_RECOMMENDED_MB) {
return {
allowed: false,
warning: `This audio (${Math.round(duration / 60)} minutes) will require ${Math.round(estimatedMemoryMB)}MB of memory. ` +
`Audio longer than ${Math.round((MAX_RECOMMENDED_MB * 1024 * 1024) / (sampleRate * channels * 4) / 60)} minutes may cause performance issues.`,
estimatedMemoryMB,
availableMemoryMB,
};
}
if (estimatedMemoryMB > WARN_THRESHOLD_MB) {
const warning = availableMemoryMB
? `This audio will require approximately ${Math.round(estimatedMemoryMB)}MB of memory. ` +
`Your device has ${Math.round(availableMemoryMB)}MB available.`
: `This audio will require approximately ${Math.round(estimatedMemoryMB)}MB of memory.`;
return {
allowed: true,
warning,
estimatedMemoryMB,
availableMemoryMB,
};
}
return {
allowed: true,
estimatedMemoryMB,
availableMemoryMB,
};
}
/**
* Format memory size in human-readable format
* @param bytes Size in bytes
* @returns Formatted string (e.g., "1.5 MB", "250 KB")
*/
export function formatMemorySize(bytes: number): string {
if (bytes < 1024) {
return `${bytes} B`;
} else if (bytes < 1024 * 1024) {
return `${(bytes / 1024).toFixed(1)} KB`;
} else if (bytes < 1024 * 1024 * 1024) {
return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
} else {
return `${(bytes / (1024 * 1024 * 1024)).toFixed(2)} GB`;
}
}
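The estimate is plain arithmetic: duration × sample rate × channels × 4 bytes per 32-bit float. Restated below alongside `formatMemorySize`'s thresholds for a standalone check (hypothetical local names `estimateMB` and `formatSize`):

```typescript
// Same arithmetic as estimateAudioMemory: 32-bit floats, so 4 bytes/sample.
function estimateMB(duration: number, sampleRate = 48000, channels = 2): number {
  return (duration * sampleRate * channels * 4) / (1024 * 1024);
}

// Same thresholds as formatMemorySize above.
function formatSize(bytes: number): string {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  if (bytes < 1024 * 1024 * 1024) return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
  return `${(bytes / (1024 * 1024 * 1024)).toFixed(2)} GB`;
}

// One minute of stereo 48 kHz audio decodes to roughly 22 MB in memory.
const minuteMB = estimateMB(60);
```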

lib/utils/timeline.ts (new file, 93 lines)
@@ -0,0 +1,93 @@
/**
* Timeline coordinate conversion and formatting utilities
*/
/**
* Base pixels per second at zoom level 1
* zoom=1: 5 pixels per second
* zoom=2: 10 pixels per second, etc.
*/
const PIXELS_PER_SECOND_BASE = 5;
/**
* Convert time (in seconds) to pixel position
*/
export function timeToPixel(time: number, duration: number, zoom: number): number {
if (duration === 0) return 0;
const totalWidth = duration * zoom * PIXELS_PER_SECOND_BASE;
return (time / duration) * totalWidth;
}
/**
* Convert pixel position to time (in seconds)
*/
export function pixelToTime(pixel: number, duration: number, zoom: number): number {
if (duration === 0) return 0;
const totalWidth = duration * zoom * PIXELS_PER_SECOND_BASE;
return (pixel / totalWidth) * duration;
}
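timeToPixel and pixelToTime are exact inverses for a fixed duration and zoom: the duration terms cancel, leaving `pixel = time * zoom * 5`. A standalone roundtrip check, restating both functions with the same base constant:

```typescript
const PX_PER_SECOND_BASE = 5; // mirrors PIXELS_PER_SECOND_BASE above

// Same formula as timeToPixel.
function toPixel(time: number, duration: number, zoom: number): number {
  if (duration === 0) return 0;
  return (time / duration) * (duration * zoom * PX_PER_SECOND_BASE);
}

// Same formula as pixelToTime.
function toTime(pixel: number, duration: number, zoom: number): number {
  if (duration === 0) return 0;
  return (pixel / (duration * zoom * PX_PER_SECOND_BASE)) * duration;
}

// 30 s into a 60 s project at zoom 2 → 30 * 2 * 5 = 300 px, and back again.
const px = toPixel(30, 60, 2);
const t = toTime(px, 60, 2);
```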
/**
* Calculate appropriate tick interval based on visible duration
* Returns interval in seconds
*/
export function calculateTickInterval(visibleDuration: number): {
major: number;
minor: number;
} {
// Very zoomed in: show sub-second intervals
if (visibleDuration < 5) {
return { major: 1, minor: 0.5 };
}
// Zoomed in: major ticks every 5 s, minor every second
if (visibleDuration < 20) {
return { major: 5, minor: 1 };
}
// Medium zoom: major ticks every 10 s, minor every 5 s
if (visibleDuration < 60) {
return { major: 10, minor: 5 };
}
// Zoomed out: major ticks every 30 s, minor every 10 s
if (visibleDuration < 300) {
return { major: 30, minor: 10 };
}
// Very zoomed out: show every minute
return { major: 60, minor: 30 };
}
/**
* Format time in seconds to display format
* Returns format like "0:00", "1:23", "12:34.5"
*/
export function formatTimeLabel(seconds: number, showMillis: boolean = false): string {
const mins = Math.floor(seconds / 60);
const secs = seconds % 60;
if (showMillis) {
const wholeSecs = Math.floor(secs);
const decimalPart = Math.floor((secs - wholeSecs) * 10);
return `${mins}:${wholeSecs.toString().padStart(2, '0')}.${decimalPart}`;
}
return `${mins}:${Math.floor(secs).toString().padStart(2, '0')}`;
}
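Note that the sub-second digit is truncated, not rounded. A quick sanity check of the label format, restating formatTimeLabel inline:

```typescript
// Same logic as formatTimeLabel above.
function label(seconds: number, showMillis = false): string {
  const mins = Math.floor(seconds / 60);
  const secs = seconds % 60;
  if (showMillis) {
    const whole = Math.floor(secs);
    const tenth = Math.floor((secs - whole) * 10); // truncated, not rounded
    return `${mins}:${whole.toString().padStart(2, '0')}.${tenth}`;
  }
  return `${mins}:${Math.floor(secs).toString().padStart(2, '0')}`;
}
```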
/**
* Calculate visible time range based on scroll position
*/
export function getVisibleTimeRange(
scrollLeft: number,
viewportWidth: number,
duration: number,
zoom: number
): { start: number; end: number } {
const start = pixelToTime(scrollLeft, duration, zoom);
const end = pixelToTime(scrollLeft + viewportWidth, duration, zoom);
return {
start: Math.max(0, start),
end: Math.min(duration, end),
};
}

lib/workers/audio.worker.ts (new file, 200 lines)
@@ -0,0 +1,200 @@
/**
* Web Worker for heavy audio computations
* Offloads waveform generation, analysis, and normalization to background thread
*/
export interface WorkerMessage {
id: string;
type: 'generatePeaks' | 'generateMinMaxPeaks' | 'normalizePeaks' | 'analyzeAudio' | 'findPeak';
payload: any;
}
export interface WorkerResponse {
id: string;
type: string;
result?: any;
error?: string;
}
// Message handler
self.onmessage = (event: MessageEvent<WorkerMessage>) => {
const { id, type, payload } = event.data;
try {
let result: any;
switch (type) {
case 'generatePeaks':
result = generatePeaks(
payload.channelData,
payload.width
);
break;
case 'generateMinMaxPeaks':
result = generateMinMaxPeaks(
payload.channelData,
payload.width
);
break;
case 'normalizePeaks':
result = normalizePeaks(
payload.peaks,
payload.targetMax
);
break;
case 'analyzeAudio':
result = analyzeAudio(payload.channelData);
break;
case 'findPeak':
result = findPeak(payload.channelData);
break;
default:
throw new Error(`Unknown worker message type: ${type}`);
}
const response: WorkerResponse = { id, type, result };
self.postMessage(response);
} catch (error) {
const response: WorkerResponse = {
id,
type,
error: error instanceof Error ? error.message : String(error),
};
self.postMessage(response);
}
};
/**
* Generate waveform peaks from channel data
*/
function generatePeaks(channelData: Float32Array, width: number): Float32Array {
const peaks = new Float32Array(width);
const samplesPerPeak = Math.floor(channelData.length / width);
for (let i = 0; i < width; i++) {
const start = i * samplesPerPeak;
const end = Math.min(start + samplesPerPeak, channelData.length);
let max = 0;
for (let j = start; j < end; j++) {
const abs = Math.abs(channelData[j]);
if (abs > max) {
max = abs;
}
}
peaks[i] = max;
}
return peaks;
}
/**
* Generate min/max peaks for more detailed waveform visualization
*/
function generateMinMaxPeaks(
channelData: Float32Array,
width: number
): { min: Float32Array; max: Float32Array } {
const min = new Float32Array(width);
const max = new Float32Array(width);
const samplesPerPeak = Math.floor(channelData.length / width);
for (let i = 0; i < width; i++) {
const start = i * samplesPerPeak;
const end = Math.min(start + samplesPerPeak, channelData.length);
let minVal = 1;
let maxVal = -1;
for (let j = start; j < end; j++) {
const val = channelData[j];
if (val < minVal) minVal = val;
if (val > maxVal) maxVal = val;
}
min[i] = minVal;
max[i] = maxVal;
}
return { min, max };
}
/**
* Normalize peaks to a given range
*/
function normalizePeaks(peaks: Float32Array, targetMax: number = 1): Float32Array {
const normalized = new Float32Array(peaks.length);
let max = 0;
// Find max value
for (let i = 0; i < peaks.length; i++) {
if (peaks[i] > max) {
max = peaks[i];
}
}
// Normalize
const scale = max > 0 ? targetMax / max : 1;
for (let i = 0; i < peaks.length; i++) {
normalized[i] = peaks[i] * scale;
}
return normalized;
}
/**
* Analyze audio data for statistics
*/
function analyzeAudio(channelData: Float32Array): {
peak: number;
rms: number;
crestFactor: number;
dynamicRange: number;
} {
let peak = 0;
let sumSquares = 0;
let min = 1;
let max = -1;
for (let i = 0; i < channelData.length; i++) {
const val = channelData[i];
const abs = Math.abs(val);
if (abs > peak) peak = abs;
if (val < min) min = val;
if (val > max) max = val;
sumSquares += val * val;
}
const rms = Math.sqrt(sumSquares / channelData.length);
const crestFactor = rms > 0 ? peak / rms : 0;
const dynamicRange = max - min;
return {
peak,
rms,
crestFactor,
dynamicRange,
};
}
/**
* Find peak value in channel data
*/
function findPeak(channelData: Float32Array): number {
let peak = 0;
for (let i = 0; i < channelData.length; i++) {
const abs = Math.abs(channelData[i]);
if (abs > peak) peak = abs;
}
return peak;
}
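The peak reduction itself is ordinary array math, so its behavior is easy to check outside the worker. Restating generatePeaks for a tiny buffer, with four samples reduced to two buckets of two:

```typescript
// Same reduction as the worker's generatePeaks.
function peaksOf(channelData: Float32Array, width: number): Float32Array {
  const peaks = new Float32Array(width);
  const samplesPerPeak = Math.floor(channelData.length / width);
  for (let i = 0; i < width; i++) {
    const start = i * samplesPerPeak;
    const end = Math.min(start + samplesPerPeak, channelData.length);
    let max = 0;
    for (let j = start; j < end; j++) {
      const abs = Math.abs(channelData[j]); // peaks track absolute amplitude
      if (abs > max) max = abs;
    }
    peaks[i] = max;
  }
  return peaks;
}

const data = new Float32Array([0.1, -0.5, 0.25, 0.75]);
const peaks = peaksOf(data, 2);
```

On the main thread, a caller would `postMessage` a `{ id, type: 'generatePeaks', payload }` message and resolve the matching `id` in its `onmessage` handler; passing the channel's underlying `ArrayBuffer` in the transfer list avoids copying large buffers across threads.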

Some files were not shown because too many files have changed in this diff.