Re: Gervill 0.2

Florian Bomers
Hi Alex,

good points. some notes:

> To make new functionality available we should add new interface to
> javax.sound.midi (Synthesizer2 or something like this) which will
> extends standard Synthesizer so developers could use new functionality
> via _standard_ interface.

yes, I propose "SoftwareSynthesizer" as presented in my previous
email in this thread.

> As a 1st step I suggest to create such interface in com.sun.media.sound
> package, and them move it to javax.sound.midi.

I don't see an advantage for going in 2 steps, I'd prefer to put
it in javax.sound.midi directly.

> So we need to determine which methods should be included in the extended
> interface.
> As for me the main method is setMixer(javax.sound.sampled.Mixer) (I'm
> not sure about SourceDataLine - other implementations can require
> several lines).

yes, see my proposal for that.

> Ability to specify preferred AudioFormat, Latency & Polyphony is a good
> feature, but most likely it should be optional.

I'd suggest adding "properties" for advanced functionality,
similar to AudioFormat's properties, except that they can be set
after instantiation.
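For illustration, the existing AudioFormat properties map already works this way at construction time; the sketch below uses only the real AudioFormat API, while the "resamplerType" key is a hypothetical synthesizer property, not something the JDK defines:

```java
import java.util.HashMap;
import java.util.Map;
import javax.sound.sampled.AudioFormat;

public class FormatProps {
    public static AudioFormat makeFormat() {
        // AudioFormat has carried an optional properties map since Java 5.
        Map<String, Object> props = new HashMap<>();
        props.put("resamplerType", "sinc"); // hypothetical synth property key
        return new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                44100f, 16, 2, 4, 44100f, false, props);
    }

    public static void main(String[] args) {
        AudioFormat format = makeFormat();
        // Properties are queried per key; unknown keys return null.
        System.out.println(format.getProperty("resamplerType")); // prints "sinc"
    }
}
```

The proposal here differs only in that such properties would remain settable after the synthesizer object exists, rather than being fixed in the constructor.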

> BTW do you think selecting from several resamplers is useful feature?

yes, very useful. E.g. for realtime playback, a linear
interpolator will be enough (for the sake of compatibility, CPU
usage, etc.). But for software that renders MIDI files to disk
(e.g. by using the setOutputStream() method of my proposal),
quality matters more than realtime performance.
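For context, a linear interpolator of the kind mentioned here is just a weighted average of the two nearest source samples. A minimal mono sketch (purely illustrative, not Gervill's actual implementation):

```java
/** Minimal linear-interpolation resampler sketch (illustrative only). */
public class LinearResample {
    /** Resamples mono float samples; step = sourceRate / targetRate. */
    static float[] resample(float[] in, double step) {
        int outLen = (int) ((in.length - 1) / step) + 1;
        float[] out = new float[outLen];
        for (int i = 0; i < outLen; i++) {
            double pos = i * step;       // fractional position in the source
            int idx = (int) pos;
            double frac = pos - idx;
            float next = (idx + 1 < in.length) ? in[idx + 1] : in[idx];
            // Weighted average of the two nearest source samples.
            out[i] = (float) (in[idx] * (1.0 - frac) + next * frac);
        }
        return out;
    }

    public static void main(String[] args) {
        // Upsample a ramp by 2x: interpolated points fall halfway between neighbors.
        float[] up = resample(new float[] {0f, 1f, 2f}, 0.5);
        System.out.println(java.util.Arrays.toString(up)); // prints [0.0, 0.5, 1.0, 1.5, 2.0]
    }
}
```

Higher-quality resamplers (cubic, windowed sinc) consider more neighboring samples per output sample, which is exactly why they cost more CPU and suit offline rendering better.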

> If somebody wants to capture synthesizer outputs, he will implement
> simplest Mixer and set it as output mixer for the synthesizer.

I think a way of directly grabbing the synth output by way of
OutputStream (or AudioInputStream, maybe better) will be more
versatile and more Java-like.
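A sketch of why the AudioInputStream route is convenient: once rendered bytes are wrapped in an AudioInputStream, the standard AudioSystem.write call produces a WAVE file with no Mixer implementation involved. Everything below is real JDK API; the PCM buffer merely stands in for synth output:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class CaptureDemo {
    public static byte[] renderToWave() throws IOException {
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        // Stand-in for the synth's rendered output: 100 frames of silence.
        byte[] pcm = new byte[100 * format.getFrameSize()];
        AudioInputStream stream = new AudioInputStream(
                new ByteArrayInputStream(pcm), format, 100); // length in frames
        // The same call accepts a File target; no Mixer or audio device needed.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        AudioSystem.write(stream, AudioFileFormat.Type.WAVE, out);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] wav = renderToWave();
        // A RIFF/WAVE header precedes the 400 PCM bytes.
        System.out.println(wav.length > 400); // prints true
    }
}
```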

> 3. Why the synth support only single receiver? (look at
> com.sun.media.sound.AbstractMidiDevice for example of multi-recevers
> implementation).

yes, should support multiple receivers.

Florian


> 4. WaveFloatFileReader: it seems to me that the class is not used anywhere.
>
> 5. LargeSoundbankReader: what use cases you see for it? (as far as I see
> from the code, currently its feature is not used anywhere)
>
> Regards
> Alex
>
>
> Karl Helgason wrote:
>> Hi,
>>
>> I have updated the midi synthesizer:
>> http://sourceforge.net/project/showfiles.php?group_id=175084&package_id=246500
>>
>> - Large format support.
>> - Fix SoundFont modulator mapping.
>> - Fix handling of unsigned 16 bit streams
>> - Improved GUS patch support
>> - Add support for ping-pong/bi-directional and reverse loops
>>
>> regards
>> Karl Helgason

--
Florian Bomers
Bome Software

-------------------------------------------------------
Music Software, Development Tools:  http://www.bome.com
Java Sound extensions, plugins: http://www.tritonus.org
The Java Sound Resources:    http://www.jsresources.org
-------------------------------------------------------
Please quote this email in your reply. Thanks!

_______________________________________________
audio-engine-dev mailing list
[hidden email]
http://mail.openjdk.java.net/mailman/listinfo/audio-engine-dev

Re: Gervill 0.2

Alex Menkov
Florian,

see my notes inline

Florian Bomers wrote:
>> As a 1st step I suggest to create such interface in com.sun.media.sound
>> package, and them move it to javax.sound.midi.
>
> I don't see an advantage for going in 2 steps, I'd prefer to put
> it in javax.sound.midi directly.

As you know, I have to get CCC approval for API changes, and getting
the approval may take a long time (approval is not required for
changes in the internal com.sun.media.sound package). So the
interface could be created in com.sun.media.sound and, once approved,
moved into javax.sound.midi.

But maybe we don't need the intermediate stage, since the SoftSynthesizer
project will live as a separate part before integration into the JDK, so we
will have enough time to get all approvals.

>> So we need to determine which methods should be included in the extended
>> interface.
>> As for me the main method is setMixer(javax.sound.sampled.Mixer) (I'm
>> not sure about SourceDataLine - other implementations can require
>> several lines).
>
> yes, see my proposal for that.
>
>> Ability to specify preferred AudioFormat, Latency & Polyphony is a good
>> feature, but most likely it should be optional.
>
> I'd suggest to add "properties" for advanced functionality,
> similar to AudioFormat's properties, except that they can be set
> after instanciation.

I thought about something like this; the main difficulty I see is how to
restrict the available values for specific properties like
"resamplerType" (the interface will be public, so other
implementations should be able to add their own supported
"resamplerType" values as well as their own properties).
Do you have an idea how this could look?
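One possible shape, sketched here with entirely hypothetical names: each implementation publishes descriptor objects for its own properties, and a shared helper validates requested values against the advertised choices. (This anticipates the getPropertyInfo idea discussed later in the thread; none of these types exist in the JDK.)

```java
import java.util.Map;

public class PropertyCheck {
    /** Hypothetical descriptor: a property name plus its legal values (null = unrestricted). */
    record PropertyInfo(String name, Object[] choices) {}

    /** Returns true if every entry names a known property with an allowed value. */
    static boolean validate(Map<String, Object> settings, PropertyInfo[] supported) {
        for (Map.Entry<String, Object> e : settings.entrySet()) {
            boolean ok = false;
            for (PropertyInfo info : supported) {
                if (!info.name().equals(e.getKey())) continue;
                if (info.choices() == null) { ok = true; break; }
                for (Object c : info.choices()) {
                    if (c.equals(e.getValue())) { ok = true; break; }
                }
                break; // name matched; value either passed or failed above
            }
            if (!ok) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Each implementation advertises its own set; others simply add entries.
        PropertyInfo[] supported = {
            new PropertyInfo("resamplerType", new Object[] {"linear", "sinc"}),
            new PropertyInfo("maxPolyphony", null),
        };
        System.out.println(validate(Map.of("resamplerType", "sinc"), supported));  // prints true
        System.out.println(validate(Map.of("resamplerType", "cubic"), supported)); // prints false
    }
}
```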

>> BTW do you think selecting from several resamplers is useful feature?
>
> yes, very useful. E.g. for realtime playback, a linear
> interpolator will be enough (for the sake of compatiblity, CPU
> usage, etc.). But for a software that renders MIDI files to disk
> (e.g. by using the setOutputStream() method of my proposal),
> quality matters more than realtime performance.

Okay, I've got it.
Just one note: if some resampler is not fast enough for realtime
playback, it will most likely produce bad results for software
rendering too (the sequencer works in realtime anyway and will not
wait for the synthesizer to finish processing previous messages
before sending new ones).

>
>> If somebody wants to capture synthesizer outputs, he will implement
>> simplest Mixer and set it as output mixer for the synthesizer.
>
> I think a way of directly grabbing the synth output by way of
> OutputStream (or AudioInputStream, maybe better) will be more
> versatile and more Java like.

yes, agreed.
With such an approach we also don't have to implement a fully spec-compliant Mixer.

Regards
Alex

>
>> 3. Why the synth support only single receiver? (look at
>> com.sun.media.sound.AbstractMidiDevice for example of multi-recevers
>> implementation).
>
> yes, should support multiple receivers.
>
> Florian
>
>
>> 4. WaveFloatFileReader: it seems to me that the class is not used anywhere.
>>
>> 5. LargeSoundbankReader: what use cases you see for it? (as far as I see
>> from the code, currently its feature is not used anywhere)
>>
>> Regards
>> Alex
>>
>>
>> Karl Helgason wrote:
>>> Hi,
>>>
>>> I have updated the midi synthesizer:
>>> http://sourceforge.net/project/showfiles.php?group_id=175084&package_id=246500
>>>
>>> - Large format support.
>>> - Fix SoundFont modulator mapping.
>>> - Fix handling of unsigned 16 bit streams
>>> - Improved GUS patch support
>>> - Add support for ping-pong/bi-directional and reverse loops
>>>
>>> regards
>>> Karl Helgason
_______________________________________________
audio-engine-dev mailing list
[hidden email]
http://mail.openjdk.java.net/mailman/listinfo/audio-engine-dev

Re: Gervill 0.2

Karl Helgason
Hi Alex,

Here are my answers to your notes.

3. (multi-receiver implementation)
You are correct about this one; I'll fix it.

4.
WaveFloatFileReader is used to provide support
for Wave files in FLOAT format. It was supposed to be
registered as an AudioFileReader in /META-INF/services
(which I forgot to do in the current release of Gervill).

5.
LargeSoundbankReader is used to provide
support for loading soundbanks in large mode,
i.e. where sample data is not loaded into memory.

I use it to load a large piano soundfont (about 500 MB)
and the Sonivox 250 MB GM Wavetable soundfont
(from http://www.sonivoxmi.com).

You can see a similar idea in javax.swing.JTree,
which has the method setLargeModel(boolean newValue).

This is probably not the best way to do it.
Ideally we would have
  getSoundbank(File file, boolean largeModel)
in javax.sound.midi.MidiSystem.

Having large-model support is also useful when we
want to use big audio files as a soundbank,
for example if we have a pre-recorded audio file
which we want to play along with some MIDI sequence file.

2.
I don't think providing a simple Mixer is good enough.
The reason for the Mixer interface was to provide
support for reading audio from the synthesizer in pull mode
(where the user pulls data from the synthesizer
instead of it being pushed through a Mixer/SourceDataLine).
In the example below I suggest we add this method instead:

  public AudioInputStream openStream(AudioFormat targetFormat)

which is a much simpler and more intuitive way than using the Mixer interface.
The stream can also be written directly to a Wave file using:
 AudioSystem.write(AudioInputStream stream, AudioFileFormat.Type fileType, File out)

1.
I propose an AudioSynthesizer interface:
a synthesizer that pushes audio into a SourceDataLine,
or provides audio that the user pulls from an AudioInputStream.

When the synthesizer is in push mode, a special
audio feeder thread is created which pushes data into
the specified SourceDataLine or the system default Mixer.
The reason I chose SourceDataLine is that it includes
information about the audio format and buffer size to use,
so the user doesn't have to specify format and latency
(they are already provided by the line.getFormat() and line.getBufferSize() methods).

No audio feeder thread is needed when the synthesizer
is in pull mode. This is suitable when
rendering audio in non-realtime mode.
A Sequencer object can't be used with the synthesizer in this mode.

I also suggest adding "properties" for advanced functionality,
similar to java.sql.Driver, which has a getPropertyInfo
method that provides information about possible properties.
With such a method we can restrict properties like "resamplerType".


Here is a brief example of how the synthesizer is used in pull mode:
-----------------------------------------------------------------

/*
 * Open the synthesizer in pull mode in the format 96000 Hz, 24 bit, stereo,
 * using sinc interpolation for highest quality, with a max polyphony of 1024.
 */
AudioFormat format = new AudioFormat(96000, 24, 2, true, false);
// Assumes the installed default synthesizer implements AudioSynthesizer.
AudioSynthesizer synthesizer = (AudioSynthesizer) MidiSystem.getSynthesizer();
Map<String,Object> info = new HashMap<String,Object>();
info.put("resamplerType", "sinc");
info.put("maxPolyphony", "1024");
AudioInputStream stream = synthesizer.openStream(format, info);

/*
 * Play MIDI note 60 on channel 1 for 1 sec (timestamps are in microseconds).
 */
Receiver recv = synthesizer.getReceiver();
ShortMessage msg = new ShortMessage();
msg.setMessage(ShortMessage.NOTE_ON, 0, 60, 80);
recv.send(msg, 0);
msg.setMessage(ShortMessage.NOTE_OFF, 0, 60, 0);
recv.send(msg, 1000000);

/*
 * Calculate how many sample frames 10 seconds are
 * (AudioInputStream lengths are measured in frames).
 */
long len = (long)(format.getFrameRate() * 10);

/*
 * Write 10 seconds into the output file.
 */
stream = new AudioInputStream(stream, format, len);
AudioSystem.write(stream, AudioFileFormat.Type.WAVE, new File("output.wav"));

/*
 * Close all resources.
 */
recv.close();
stream.close();
synthesizer.close();



------------------------------------------------------------

class SynthesizerPropertyInfo {

  /*
   * Array of possible values for the field
   * SynthesizerPropertyInfo.value.
   */
  Object[] choices;

  /*
   * A brief description of the property, which may be null.
   */
  String description;

  /*
   * The name of the property.
   */
  String name;

  /*
   * Default value used by the synthesizer if not specified.
   */
  Object value;

  /*
   * The class of object used in the value field.
   */
  Class valueClass;

}


interface AudioSynthesizer extends Synthesizer {

  /*
   * Gets information about the possible
   * properties for this synthesizer.
   */
  SynthesizerPropertyInfo[] getPropertyInfo();

  /*
   * Opens the device in push mode (i.e. the AudioSynthesizer
   * is responsible for rendering and writing its data
   * to the SourceDataLine).
   * If no line is specified, the system default mixer and line
   * are used.
   *
   * Additional parameters can be set through the info parameter.
   * To query which parameters are available, use the getPropertyInfo method.
   */
  public void open(SourceDataLine line, Map<String,Object> info);

  /*
   * Opens the device in push mode (i.e. the AudioSynthesizer
   * is responsible for rendering and writing its data
   * to the SourceDataLine).
   * If no line is specified, the system default mixer and line
   * are used.
   */
  public void open(SourceDataLine line);

  /*
   * Opens the device in pull mode, i.e. audio data is rendered
   * as the user calls read on the AudioInputStream.
   * This is suitable when the user wants to render MIDI data
   * in non-realtime.
   *
   * Additional parameters can be set through the info parameter.
   * To query which parameters are available, use the getPropertyInfo method.
   */
  public AudioInputStream openStream(AudioFormat targetFormat, Map<String,Object> info);

  /*
   * Opens the device in pull mode, i.e. audio data is rendered
   * as the user calls read on the AudioInputStream.
   * This is suitable when the user wants to render MIDI data
   * in non-realtime.
   */
  public AudioInputStream openStream(AudioFormat targetFormat);

}
_______________________________________________
audio-engine-dev mailing list
[hidden email]
http://mail.openjdk.java.net/mailman/listinfo/audio-engine-dev