
2021 AFC General Discussion


Audio Comparer V1.7



One must be very comfortable perambulating interior space to be an audio wire designer. You spend your time juggling a limited number of variables in quest of the happy permutation that will result in a leg up over the competition. And contemplation only takes you so far. Prototypes must be built. Odds are high these will wind up in the pile over in that dark corner.







My secret weapon is my wife, Lynn, who provides invaluable musical insight. With decades of training as an oboist and a tenure as past President of a community orchestra, she knows what sounds natural and what doesn't. Many are the times she's pulled me back from the brink after a multi-day audio binge.


To first assess the effect of repeated exposure to a visual stimulus over the course of conditioning, we examined population responses to Vc, which was never paired with an auditory cue or reinforced, and found a general decrease in responsiveness across days (Extended Data Fig. 1b). To test whether experience with audio-visual sequential pairings affected whether V1 responded differently to a visual stimulus, we first compared the average population responses to the auditory cue and visual stimulus pair that was followed by a reward (AaVa) to that of the same visual stimulus (Va) presented alone. We found that, on day 1 of conditioning, the two visual responses were similar (Fig. 1c). Analogous to Vc, over the course of conditioning, the visual responses to both AaVa and Va decreased (Extended Data Fig. 1c). Interestingly, however, we found that the auditory cue preceding the paired visual stimulus resulted in an additional suppression of the visual response that increased with experience (Fig. 1c,d and Extended Data Fig. 1c). Furthermore, this suppression was most prominent for the auditory and visual stimuli followed by a water reward.

For the audio-visual stimuli followed by an air puff (AbVb), we also observed a suppression of the visual response after the auditory cue; however, this suppression had already developed on day 1 and was weaker and more variable than in the rewarded condition (Extended Data Fig. 1d,f). Additionally, the auditory cue itself resulted in a slight increase in V1 activity initially and a slight decrease later in conditioning (Extended Data Fig. 1e). In mice that underwent the same pairing paradigm without any reinforcement, visual responses were smaller on average (Extended Data Fig. 1g), and the auditory cue did not result in a consistent suppression of the visual response (Extended Data Fig. 1g,i). Similar to reinforced conditioning, the auditory cue itself initially resulted in a slight increase in activity, but, unlike reinforced conditioning, this response did not change over time (Extended Data Fig. 1h).

To investigate the mechanism of auditory-cue-driven suppression of visual responses, we focused subsequent analyses on the stimuli that were reinforced with a water reward. In addition to the experience-dependent auditory-cue-driven suppression, we also found that the visual responses to AaVa and Va de-correlated with experience (Extended Data Fig. 2a). Thus, experience with sequential audio-visual pairings can change the way V1 represents visual stimuli depending on the behavioral relevance of the stimuli.
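As an illustration of what a de-correlation analysis of this kind could look like, here is a minimal sketch in Python. All data and the similarity measure are hypothetical: the trial-averaged response vectors (one value per neuron) and the use of a simple Pearson correlation are assumptions for illustration, not the paper's actual analysis pipeline.

```python
import numpy as np

def population_correlation(resp_a, resp_b):
    """Pearson correlation between two trial-averaged population
    response vectors (one entry per neuron)."""
    return float(np.corrcoef(resp_a, resp_b)[0, 1])

# Hypothetical data: trial-averaged responses of 5 neurons
# to the paired (AaVa) and unpaired (Va) visual stimulus.
day1_AaVa = np.array([1.0, 0.8, 0.5, 0.2, 0.9])
day1_Va   = np.array([0.9, 0.7, 0.6, 0.3, 0.8])  # similar on day 1
day4_AaVa = np.array([0.2, 0.9, 0.1, 0.7, 0.3])
day4_Va   = np.array([0.8, 0.2, 0.7, 0.1, 0.9])  # diverged by day 4

r1 = population_correlation(day1_AaVa, day1_Va)
r4 = population_correlation(day4_AaVa, day4_Va)
# In this toy example, r1 is high and r4 is low: the two visual
# representations de-correlate with experience.
assert r1 > r4
```

A drop in this correlation across conditioning days is one simple way to express the statement that the AaVa and Va representations diverged.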


Recording the activity of AuC axons in V1, we found that, early in conditioning, these carried both an auditory response and a visual response (Fig. 2c). Interestingly, the visual responses were larger than the auditory responses and, unlike responses in V1, increased slightly over the course of conditioning (Fig. 2c and Extended Data Fig. 4c,d). Conversely, the auditory responses in AuC axons, like the visual responses in V1, decreased across conditioning days (Fig. 2c and Extended Data Fig. 4e). Intrigued by the strength of the visual responses, we mapped them as a function of the retinotopic location of the visual stimulus and found that they had receptive fields matching the retinotopic location of the recording site in V1 (Fig. 2d, top). This is consistent with the interpretation that the responses after visual stimulus onset in the paired presentation, AaVa, are likely visually driven rather than delayed auditory responses or anticipatory motor responses. These visual responses were absent in anesthetized recordings (Fig. 2d, bottom), suggesting that they might arise from cortico-cortical top-down-like connections (refs. 25,26).

Given that visual cortex also projects to AuC (refs. 16,20), it is possible that the visual responses in AuC axons are inherited from retinotopically matched V1 neurons. To test this, we examined AuC axon responses while silencing activity in V1 locally. We used a mouse line expressing Cre in parvalbumin (PV)-positive interneurons (ref. 27) and injected an AAV vector to express a Cre-dependent channelrhodopsin variant in V1 (AAV2/1-EF1α-DIO-ChrimsonR-tdTomato). We then quantified the effect of locally silencing V1 using optogenetic activation of PV interneurons while imaging the calcium responses in AuC axons (Methods). Surprisingly, we found that inhibition of V1 activity was effective in suppressing auditory-evoked responses in the AuC axons but resulted in no suppression of visual responses before conditioning and only a small reduction after conditioning (Fig. 2e,f).

The responsiveness of AuC projection axons to visual stimuli is consistent with previous work in awake mice showing that visually responsive neurons in AuC are predominantly found in layers 5 and 6 (ref. 28), which send collaterals to cortical targets, including V1 (ref. 9). However, the role of visual responses in AuC remains elusive. Our results show that AuC conveys a retinotopically matched visual signal to V1 largely independent of V1 activity. Such a signal could potentially function to inhibit the auditory-cued visual response in visual cortex. For AuC input to contribute to the experience-dependent suppression of auditory-cued visual responses, we would expect an experience-dependent change in the AuC axon responses over the course of conditioning. Congruently, we found a decrease in the similarity between axon visual responses to AaVa and Va between day 1 and day 4 of conditioning (Fig. 2g). In addition, we found that the fraction of visually responsive axons was greater when the visual stimulus followed the auditory cue (AaVa) than when it was presented alone (Va) (Fig. 2h).

This result prompted us to examine differences in the visual responsivity of AuC axons when mice were tasked with learning audio-visual associations compared to when they were similarly exposed only to visual stimuli. We therefore exposed the mice in our audio-visual conditioning context to a second context, over the same time course of conditioning, in which only visual stimuli were presented (Methods). We found that, although the overall fraction of visually responsive axons increased from day 1 to day 4 of conditioning in the audio-visual context (Fig. 2i, left), there was no change in the fraction of visually responsive axons from day 1 to day 4 in the visual-only context (Fig. 2i, right). Thus, AuC input to V1 exhibits an experience-dependent modulation of the visual response by the auditory cue.
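For readers unfamiliar with this type of quantification, comparing fractions of visually responsive axons can be sketched as follows. Everything here is hypothetical: the dF/F values are invented, and a fixed-threshold criterion stands in for whatever statistical responsiveness test the study actually used.

```python
import numpy as np

def responsive_fraction(dff, threshold=0.1):
    """Fraction of axons whose trial-averaged response exceeds a
    fixed dF/F threshold. This is a simplified responsiveness
    criterion for illustration only; a real analysis would use a
    statistical test against baseline activity."""
    dff = np.asarray(dff)
    return float((dff > threshold).mean())

# Hypothetical trial-averaged responses (dF/F) of 8 AuC axons to the
# visual stimulus when it followed the auditory cue (AaVa) versus
# when it was presented alone (Va).
resp_AaVa = [0.25, 0.05, 0.30, 0.12, 0.02, 0.40, 0.15, 0.01]
resp_Va   = [0.25, 0.05, 0.08, 0.12, 0.02, 0.09, 0.04, 0.01]

frac_paired = responsive_fraction(resp_AaVa)  # 5 of 8 -> 0.625
frac_alone  = responsive_fraction(resp_Va)    # 2 of 8 -> 0.25
```

In this toy example more axons cross the criterion in the paired condition, mirroring the direction of the reported effect (Fig. 2h).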


(a) Our results demonstrate that, with experience, the top-down input from AuC to V1 rearranges to target the layer 2/3 neurons in V1 responsive to Va for suppression. This is consistent with a cross-modal suppression of predictable bottom-up input in V1.

(b) Given that the interaction between AuC and V1 is not hierarchical, our results suggest that predictive processing can be expanded to non-hierarchical interactions in cortex. This could be achieved, for example, as follows: V1 and AuC mutually exchange predictions through top-down-like projections and in return receive prediction errors through bottom-up-like projections. See also ref. 38 for an extended discussion of non-hierarchical predictive processing.

(c) More specifically, the cortical circuit for predictive processing (ref. 38) can be directly expanded to lateral interactions between AuC and V1, as described in the following. Please note that this is an attempt at integrating our results with previous work on cortical circuits for predictive processing, not a summary of our results. For simplicity, only the exchange of predictive top-down-like signals is shown. Bottom-up visual input is compared, in prediction error neurons in V1, to top-down predictions of visual input from AuC. Our results are consistent with the responses of such prediction error neurons in layer 2/3. The model postulates that audio-visual integration then occurs by virtue of internal representation neurons integrating over these prediction error responses. Identifying internal representation neurons will be key to further validating this model and will likely hinge on having genetic access to the functionally identified prediction error neurons we describe here.


@WyzeChao Here is the V2 and V3 side-by-side audio comparison, 12 feet up the tree. Unfortunately, the clips were recorded at separate times because of the WiFi distance to the mesh WAP, and the V3 beat the V2 in data transmission.


I had to unplug the V3 for the V2 to get audio to the cloud, and for some odd reason the V2 audio was not captured on the SD card at the exact moment when both cameras were on and sound was enabled (must have been a firmware bug, like that ever happens).


The V3 Wyze Cam audio definitely could use some improvement, but your hardware may be the limiting factor here. Maybe the cams you sell come from two different Chinese manufacturers, or someone chose a cheaper microphone and audio IC / audio processor?


And to be fair to WYZE, the V2 is not IP-65 rated, while the microphone hole on the IP-65-rated V3 has a protective layer to keep out moisture, which could affect the audio too.

[image: V2-V3 mic IP-68]

Again, the forum rotated my picture automatically. To view it properly, click on it, enlarge and download it, then rotate it.

