I searched the web for a way to read microphone audio data in Java.
http://blog.gtiwari333.com/2011/12/java-sound-capture-from-microphone.html?m=1
The code there worked the best of everything I found, but the author left out two things: the AudioFormat to use and the WaveData class. So I am leaving a rough guide here on how to get it running.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.logging.Logger;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;

public class MicrophoneRecorder implements Runnable {

    public static Logger logger = Logger.getLogger(MicrophoneRecorder.class.getName());

    // record microphone && generate stream/byte array
    private AudioInputStream audioInputStream;
    private AudioFormat format;
    public TargetDataLine line;
    public Thread thread;
    private double duration;

    public MicrophoneRecorder(AudioFormat format) {
        super();
        this.format = format;
    }

    public void start() {
        thread = new Thread(this);
        thread.setName("Capture");
        thread.start();
    }

    public void stop() {
        // clearing the thread reference makes the while loop in run() exit
        thread = null;
    }

    @Override
    public void run() {
        duration = 0;
        line = getTargetDataLineForRecord();
        if (line == null) {
            logger.severe("No TargetDataLine available for the requested format");
            return;
        }
        final ByteArrayOutputStream out = new ByteArrayOutputStream();
        final int frameSizeInBytes = format.getFrameSize();
        // read in chunks of 1/8 of the line's buffer
        final int bufferLengthInFrames = line.getBufferSize() / 8;
        final int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;
        final byte[] data = new byte[bufferLengthInBytes];
        int numBytesRead;
        line.start();
        logger.info("Line started");
        while (thread != null) {
            if ((numBytesRead = line.read(data, 0, bufferLengthInBytes)) == -1) {
                break;
            }
            // logger.info(new String(data, StandardCharsets.UTF_8));
            out.write(data, 0, numBytesRead);
        }
        logger.info("Capture loop finished");
        // we reached the end of the stream: stop and close the line
        line.stop();
        line.close();
        line = null;
        logger.info("Line closed");
        // stop and close the output stream
        try {
            out.flush();
            out.close();
        } catch (final IOException ex) {
            ex.printStackTrace();
        }
        // load bytes into the audio input stream for playback
        final byte[] audioBytes = out.toByteArray();
        logger.info("Copied " + audioBytes.length + " bytes out of the output stream");
        // Note: raw PCM bytes are not text, so dumping them as UTF-8 only prints garbage:
        // logger.info(new String(audioBytes, StandardCharsets.UTF_8));
        final ByteArrayInputStream bais = new ByteArrayInputStream(audioBytes);
        audioInputStream = new AudioInputStream(bais, format, audioBytes.length / frameSizeInBytes);
        logger.info("AudioInputStream created");
        final long milliseconds = (long) ((audioInputStream.getFrameLength() * 1000) / format.getFrameRate());
        duration = milliseconds / 1000.0;
        System.out.println(duration);
        try {
            // rewind so the stream can be read again for playback or saving
            audioInputStream.reset();
            System.out.println("resetting...");
        } catch (final Exception ex) {
            ex.printStackTrace();
            return;
        }
    }

    private TargetDataLine getTargetDataLineForRecord() {
        TargetDataLine line;
        final DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        if (!AudioSystem.isLineSupported(info)) {
            return null;
        }
        // get and open the target data line for capture
        try {
            line = (TargetDataLine) AudioSystem.getLine(info);
            line.open(format, line.getBufferSize());
        } catch (final Exception ex) {
            ex.printStackTrace();
            return null;
        }
        return line;
    }

    public AudioInputStream getAudioInputStream() {
        return audioInputStream;
    }

    public AudioFormat getFormat() {
        return format;
    }

    public void setFormat(AudioFormat format) {
        this.format = format;
    }

    public Thread getThread() {
        return thread;
    }

    public double getDuration() {
        return duration;
    }
}
The code above is essentially the code from the website linked earlier.
To test it, I wrote the following test class.
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;

import java.io.File;

import org.junit.Test;

import static org.junit.Assert.*;

public class MicrophoneRecorderTest {

    @Test
    public void microphoneTest() throws Exception {
        MicrophoneRecorder mr = new MicrophoneRecorder(new AudioFormat(16000, 16, 1, true, false));
        mr.start();
        Thread.sleep(5 * 1000);   // capture from the microphone for 5 seconds
        mr.stop();
        Thread.sleep(1000);       // give the capture thread time to build the AudioInputStream
        // // save via the original post's WaveData helper (never shown in that post)
        // WaveData wd = new WaveData();
        // Thread.sleep(3000);
        // wd.saveToFile("~tmp", AudioFileFormat.Type.WAVE, mr.getAudioInputStream());
        File file = new File("test.wav");
        AudioSystem.write(mr.getAudioInputStream(), AudioFileFormat.Type.WAVE, file);
    }
}
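The WaveData class referenced in the commented-out lines is the second thing the original post never showed. Judging from the call wd.saveToFile("~tmp", AudioFileFormat.Type.WAVE, mr.getAudioInputStream()), it is essentially a thin wrapper around AudioSystem.write, which is why the test above calls AudioSystem.write directly. Here is a hypothetical minimal sketch of such a helper; the class name and method signature are guesses modeled on that call, so the real WaveData may differ:

import java.io.File;
import java.io.IOException;

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class WaveData {
    // Hypothetical reconstruction: writes the captured stream to disk
    // in the given file format, appending the matching extension.
    public File saveToFile(String name, AudioFileFormat.Type fileType,
                           AudioInputStream audioInputStream) throws IOException {
        File file = new File(name + "." + fileType.getExtension());
        audioInputStream.reset();   // rewind to the start of the capture
        AudioSystem.write(audioInputStream, fileType, file);
        return file;
    }
}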
The AudioFormat can be configured in many ways, but I based mine on the fields documented in the JavaDoc:
https://docs.oracle.com/javase/7/docs/api/javax/sound/sampled/AudioFormat.html
Fields (modifier and type, field, description):
- protected boolean bigEndian: Indicates whether the audio data is stored in big-endian or little-endian order.
- protected int channels: The number of audio channels in this format (1 for mono, 2 for stereo).
- protected AudioFormat.Encoding encoding: The audio encoding technique used by this format.
- protected float frameRate: The number of frames played or recorded per second, for sounds that have this format.
- protected int frameSize: The number of bytes in each frame of a sound that has this format.
- protected float sampleRate: The number of samples played or recorded per second, for sounds that have this format.
- protected int sampleSizeInBits: The number of bits in each sample of a sound that has this format.
together with the following note from the Android documentation:
https://developer.android.com/reference/android/media/AudioFormat
- ENCODING_PCM_16BIT: The audio sample is a 16 bit signed integer typically stored as a Java short in a short array, but when the short is stored in a ByteBuffer, it is native endian (as compared to the default Java big endian). The short has full range from [-32768, 32767], and is sometimes interpreted as fixed point Q.15 data.
I chose the format based on those two descriptions:
new AudioFormat(16000, 16, 1, true, false)
Sample rate: 16000 Hz
16-bit, mono, signed, native endian (little-endian)
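For reference, here is the same constructor call with each argument labeled. This only restates the format above; the five-argument constructor implies linear PCM encoding:

import javax.sound.sampled.AudioFormat;

public class FormatExample {
    // AudioFormat(float sampleRate, int sampleSizeInBits, int channels,
    //             boolean signed, boolean bigEndian)
    static final AudioFormat FORMAT = new AudioFormat(
            16000f, // sampleRate: 16,000 samples per second
            16,     // sampleSizeInBits: 16 bits per sample
            1,      // channels: 1 = mono
            true,   // signed: samples are signed integers
            false   // bigEndian: false = little-endian ("native endian" on x86)
    );
}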
Thread.sleep is used twice here: the first sleep is the 5 seconds during which the voice is captured from the microphone, and the remaining 1 second gives the capture thread time to load the recorded stream into audioInputStream.
I confirmed that the voice is saved correctly to test.wav.
For a quick local test, this seems to be enough.
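One quick way to double-check the result: 5 seconds of 16 kHz, 16-bit, mono PCM should come to 16000 x 2 x 5 = 160,000 bytes of audio data, plus the WAV header (44 bytes for a canonical PCM WAV file, which is an assumption about how AudioSystem.write lays out the file). A minimal sketch of that check, assuming the test.wav produced above:

import java.io.File;

public class WavSizeCheck {
    public static void main(String[] args) {
        // 16,000 frames/s * 2 bytes/frame * 1 channel * 5 s = 160,000 bytes of PCM data
        long expectedDataBytes = 16000L * 2 * 1 * 5;
        long fileBytes = new File("test.wav").length();
        // assuming a canonical 44-byte PCM WAV header, the file should be close to this
        System.out.println("expected ~" + (expectedDataBytes + 44) + " bytes, actual = " + fileBytes);
    }
}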