\documentclass{article}

\begin{document}

\section{Comments from Bob}

Dear Amir,

I am in agreement with what Rick has to say, although I am not so
pessimistic about the future of the paper. If the paper is rewritten so
that it provides an even-handed discussion of how this algorithm works, it
would be acceptable to me. But it would be acceptable to me {\bf only} in
that form, unless we can show actual improvement. The problem is that, if
we are as honest as I think we must be, the reviewers may share Rick's
opinion and choose not to publish the paper, in which case you will have
wasted a lot of time (because of us, really because of me) and I would
feel very, very bad.

\section{Specific Comments from Rick}

Page 1: ``Wavelet analysis can be viewed as an alternative to classical
{\it Windowed Fourier Analysis}.'' This is certainly true, but I do not
see that it has anything to do with the present paper. Windowed Fourier
Analysis is not (to the best of my knowledge) used for channel detection.

Page 1: ``This framework provides us with novel robust algorithms that
will apply state-of-the-art signal and stochastic processing techniques to
some of the cutting edge problems of molecular biology ...'' This
certainly seems to overstate what is presented in this paper. By
implication, it suggests that what follows is likely to be better in some
sense than previously used techniques.

Page 2: ``Our results demonstrate the feasibility and utility of
multiscale analysis.'' In my opinion, the results do NOT demonstrate the
utility of multiscale analysis, in the sense that nothing presented shows
any obvious advantage of this approach. The utility of this approach is
assumed, but not clearly demonstrated. I am not saying that multiscale
analysis is not useful; I am saying that its utility has not been
demonstrated in this manuscript.

Page 2: ``One advantage of wavelet analysis is that it is surprisingly
undemanding of computer resources and so can be implemented in real
time.'' This has certainly not been demonstrated in the paper. Clearly,
from the description of the algorithm, this approach must require much
more computer time than is required by simple threshold detection. If we
wish to say otherwise, we must support our opinion with results.

Page 3: ``We believe that the wavelet transform theory has a good chance
to succeed in many applications where the classical Fourier analysis has
failed or is not adequate.'' I do not believe that this has anything to do
with this paper.

Page 5: ``We examine the performance of our algorithm on models with white
noise and with filtered $f^2$ noise.'' This is the only time $f^2$ noise
is mentioned in this paper; none of the results appear to have come from
channel currents embedded in $f^2$ noise.

Page 5: ``It should be noted that the filtering rounds the corners of the
signal, spreading them on about 1.7 sample points.'' Delete.

Pages 5--7: ``Training for Level Estimates -- Preprocessing Step''. I
wonder what is gained from this section. There are certainly a variety of
ways of accomplishing the level estimation, and it is not demonstrated
that this is the best or quickest method. It would not work in the
presence of any significant drift and so might have to be supplemented in
many real applications.
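To make the preceding point concrete, here is a minimal sketch of one
standard alternative for the level estimation: reading the closed and open
current levels off an all-points amplitude histogram. The function name,
the two-level assumption, and the NumPy framing are illustrative
assumptions of mine, not anything taken from the manuscript.

\begin{verbatim}
import numpy as np

def estimate_levels(current, nbins=200):
    """Estimate closed/open current levels from an all-points
    amplitude histogram (assumes a two-level channel record)."""
    counts, edges = np.histogram(current, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Local maxima of the histogram correspond to dwell levels.
    peaks = [i for i in range(1, nbins - 1)
             if counts[i] >= counts[i - 1] and counts[i] >= counts[i + 1]]
    # Keep the two most populated peaks, ordered by amplitude.
    lo, hi = sorted(sorted(peaks, key=lambda i: counts[i])[-2:])
    return centers[lo], centers[hi]
\end{verbatim}

Note that a histogram estimate of this kind also degrades under
significant baseline drift, so it shares the limitation raised above.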
Page 8, Figure 1: As I have already pointed out, operationally, all that
is being done here is really a high-pass filter (differentiation) followed
by a series of low-pass filters, with the output of each low-pass filter
being what is used at each level. This should be stated. (A sketch of this
operational reading follows these comments.)

Page 8: ``We perform several cycles of Gaussian wavelet smoothing by
applying the low-pass part of the wavelet transform.'' This is a
needlessly confusing way of stating that the data was simply passed
through simple Gaussian low-pass FIR filters. In fact, the three filters
used could have been replaced by a single filter: three Gaussian filters
with $-3$ dB bandwidths of 1, 2, and 4 used in series can be replaced with
a single filter with a $-3$ dB bandwidth of 0.873. (This calculation is
worked out after these comments.)

Page 8: ``In the future we plan to eliminate the smoothing procedure in
the beginning in order to detect short events.'' This should either
actually be tried or it should not be mentioned at all. Since the SNR will
initially be much worse without smoothing, eliminating the smoothing may
not actually work.

Page 9: ``The well known Canny edge detector ...'' This is not well known
to physiologists.

Page 9: What is a Lipschitz exponent? This is not well known to
physiologists.

Page 11: ``we detect the modulus maxima by finding the points where the
discrete wavelet transform is larger than its two closest neighbor values
and strictly larger than at least one of them''. This sounds to me like it
will detect {\sf every} noise peak (including very small ones), as well as
signal peaks. This would seem time-consuming and not very efficient.
Perhaps I do not completely understand; operationally, what exactly is
being done? (A literal reading of this rule is sketched after these
comments.)

Page 11: I do not understand what criterion is used for ``chaining''
points across several scales. E.g., it is stated that ``locations 6000,
6001, 6000, 5999, 5990, 5977 are chained according to the algorithm
already described''. I understand that each location is from a different
scale, but the locations differ by up to 24 points (6001 vs.\ 5977).
{\sf How is it determined that these all represent the same feature?}

Page 12, Table 2: I still do not understand why the $a_i^{\mathrm{ref}}$
values increase as the level number increases. What sort of normalization
is being used? Obviously, since each new level can be obtained simply by
low-pass filtering the previous level, the amplitude of peaks would
normally decrease with additional filtering.

Page 12, equation 11: why is
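For concreteness, here is a minimal sketch of the operational reading
given for Figure 1 above: a first difference (high-pass), followed by
repeated low-pass smoothing, keeping the output at each level. The
three-tap binomial kernel is an illustrative stand-in of mine for whatever
Gaussian filter the paper actually uses; this is a sketch of Rick's
description, not the authors' implementation.

\begin{verbatim}
import numpy as np

def pyramid(x, n_levels=3):
    """First difference (high-pass), then repeated low-pass smoothing;
    the signal kept at each level is the progressively smoothed
    derivative."""
    kernel = np.array([0.25, 0.5, 0.25])  # binomial ~ Gaussian smoother
    y = np.diff(x)                        # high-pass: differentiation
    levels = []
    for _ in range(n_levels):
        y = np.convolve(y, kernel, mode="same")  # low-pass filter
        levels.append(y.copy())
    return levels
\end{verbatim}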
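The replacement-filter figure quoted above can be checked directly.
Assuming ideal Gaussian responses, cascading multiplies the frequency
responses, so the $-3$ dB bandwidths combine in reciprocal square:
\[
\frac{1}{B^2}=\frac{1}{B_1^2}+\frac{1}{B_2^2}+\frac{1}{B_3^2}
            =\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{4^2}=\frac{21}{16},
\qquad
B=\sqrt{\frac{16}{21}}\approx 0.873,
\]
which reproduces the 0.873 quoted in the comment on page 8.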
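Finally, here is a minimal sketch of a literal reading of the quoted
modulus-maxima rule from page 11. The function name and the use of the
absolute value of the transform are my assumptions, and this may not be
what the authors actually implement.

\begin{verbatim}
import numpy as np

def modulus_maxima(w):
    """Indices i where |w[i]| is >= both neighbors and strictly
    greater than at least one of them -- a literal reading of the
    quoted rule. No amplitude threshold is applied."""
    m = np.abs(np.asarray(w))
    maxima = []
    for i in range(1, len(m) - 1):
        ge_both = m[i] >= m[i - 1] and m[i] >= m[i + 1]
        gt_one = m[i] > m[i - 1] or m[i] > m[i + 1]
        if ge_both and gt_one:
            maxima.append(i)
    return maxima
\end{verbatim}

Because no amplitude threshold is applied, this rule does flag every local
wiggle of the noise, which is exactly the efficiency concern raised above.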