All Videos Tagged visualization (MedTech I.Q.) - MedTech I.Q.
2024-03-29T10:38:10Z
https://medtechiq.ning.com/video/video/listTagged?tag=visualization&rss=yes&xn_auth=no

Simulation: Project LifeLike - Computer Interfaces that Learn
tag:medtechiq.ning.com,2009-01-25:2140535:Video:10525
2009-01-25T14:08:53.093Z
CC-Conrad Clyburn-MedForeSight
https://medtechiq.ning.com/profile/CCatMedTechIQ
<a href="https://medtechiq.ning.com/video/simulation-project-lifelike"><br />
<img src="https://storage.ning.com/topology/rest/1.0/file/get/2508868594?profile=original&width=120&height=90" width="120" height="90" alt="Thumbnail" /><br />
</a><br />This collaborative research project investigates, develops, and evaluates lifelike, natural computer interfaces as portals to intelligent programs in the context of a Decision Support System (DSS).<br />
<br />
The goal of this effort is to provide a natural interface that supports realistic spoken dialog and non-verbal cues and is capable of learning to keep its knowledge current and correct. Research objectives focus on the development of an avatar-based interface with which the DSS user can interact.<br />
<br />
Communication with the avatar will occur in spoken natural language combined with gestural expressions or pointing on the screen.<br />
<br />
This project extends a current National Science Foundation-sponsored, DSS-based project built on information gathered about Dr. Alex Schwarzkopf of the NSF Industry/University Cooperative Research Centers (I/UCRC) Program.<br />
<br />
This project is a collaboration between the Intelligent Systems Laboratory (ISL) at the University of Central Florida (UCF) and the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC).<br />
<br />
The EVL team's focus is on avatar development encompassing Visualization and Interaction with Realistic Avatars and Evaluation of System Naturalness and Usability. The ISL team's focus is on Natural Language Recognition and on Automated Knowledge Update and refinement.<br />
<br />
More information can be found on EVL's website: <a href="http://www.evl.uic.edu/core.php?mod=4">http://www.evl.uic.edu/core.php?mod=4</a>

Simulation: Dr Adrian Park Describes How Advanced Visualization Can Enhance Surgery
tag:medtechiq.ning.com,2008-10-13:2140535:Video:4743
2008-10-13T21:36:24.400Z
CC-Conrad Clyburn-MedForeSight
https://medtechiq.ning.com/profile/CCatMedTechIQ
<a href="https://medtechiq.ning.com/video/2140535:Video:4743"><br />
<img src="https://storage.ning.com/topology/rest/1.0/file/get/2508868030?profile=original&width=130&height=97" width="130" height="97" alt="Thumbnail" /><br />
</a><br />When you have surgery, you want your doctor to have the best tools available. Through a major grant from the U.S. Army, thanks in large part to U.S. Senator Mitch McConnell, the University of Kentucky Center for Visualization and Virtual Environments is developing groundbreaking technology to improve minimally invasive surgery. NOTE: contains images from an actual surgical procedure.

Simulation: Forsslund Systems Prototype Dental Simulator
tag:medtechiq.ning.com,2008-10-13:2140535:Video:4729
2008-10-13T21:06:47.665Z
CC-Conrad Clyburn-MedForeSight
https://medtechiq.ning.com/profile/CCatMedTechIQ
<a href="https://medtechiq.ning.com/video/2140535:Video:4729"><br />
<img src="https://storage.ning.com/topology/rest/1.0/file/get/2508866140?profile=original&width=128&height=96" width="128" height="96" alt="Thumbnail" /><br />
</a><br />A prototype simulator for surgical extraction of wisdom teeth has been developed by Forsslund Systems, based on the SenseGraphics H3D API. <a href="http://www.forsslundsystem.se">www.forsslundsystem.se</a> <a href="http://www.h3d.org">www.h3d.org</a>

Imaging: Large image databases and small codes for object recognition
tag:medtechiq.ning.com,2008-10-11:2140535:Video:4654
2008-10-11T19:29:48.331Z
CC-Conrad Clyburn-MedForeSight
https://medtechiq.ning.com/profile/CCatMedTechIQ
<a href="https://medtechiq.ning.com/video/2140535:Video:4654"><br />
<img src="https://storage.ning.com/topology/rest/1.0/file/get/2508864668?profile=original&width=130&height=97" width="130" height="97" alt="Thumbnail" /><br />
</a><br />In this Google Tech Talk, speaker Dr Rob Fergus, Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences, New York University, describes how, with the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, he explores this world with the aid of a large dataset of 79,302,017 images collected from the Web. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32x32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels, minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, he is able to demonstrate recognition performance comparable to class-specific Viola-Jones style detectors.<br />
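The tiny-image nearest-neighbor idea from the abstract can be sketched in a few lines of Python. This is a minimal illustration, not code from the talk: the toy "dark"/"bright" dataset stands in for the 79-million-image collection, and raw sum-of-squared-differences in pixel space stands in for the full range of non-parametric methods discussed.

```python
import random

def nn_classify(query, images, labels, k=3):
    """Label a query image by majority vote over its k nearest
    neighbors, using sum of squared pixel differences as distance."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(range(len(images)), key=lambda i: ssd(images[i], query))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)  # majority label

# Toy stand-in dataset: flattened 32x32 grayscale images, dark vs. bright.
random.seed(0)
def make_image(lo, hi):
    return [random.randint(lo, hi) for _ in range(32 * 32)]

images = [make_image(0, 100) for _ in range(10)] + \
         [make_image(156, 255) for _ in range(10)]
labels = ["dark"] * 10 + ["bright"] * 10

query = make_image(0, 100)                 # a new dark-ish image
print(nn_classify(query, images, labels))  # prints "dark"
```

With a large enough dataset, even this crude pixel-space distance becomes useful, which is the talk's central observation; the Wordnet hierarchy then lets the voting happen at coarser semantic levels to absorb label noise.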
<br />
In the second part of the talk, he presents efficient image search and scene matching techniques that are not only fast but also require very little memory, enabling their use on standard hardware or even on handheld devices. His approach uses the Semantic Hashing idea of Salakhutdinov and Hinton, based on Restricted Boltzmann Machines, to convert the Gist descriptor (a real-valued vector that describes orientation energies at different scales and orientations within an image) into a compact binary code of a few hundred bits per image. Using this scheme, it is possible to perform real-time searches on the Internet image database using a single large PC and obtain recognition results comparable to the full descriptor. Using the codes on high-quality labeled images from the LabelMe database gives surprisingly powerful recognition results with simple nearest-neighbor techniques.

SCI Institute - Multidisciplinary Research - University of Utah
tag:medtechiq.ning.com,2008-10-11:2140535:Video:4605
2008-10-11T13:57:12.060Z
CC-Conrad Clyburn-MedForeSight
https://medtechiq.ning.com/profile/CCatMedTechIQ
<a href="https://medtechiq.ning.com/video/2140535:Video:4605"><br />
<img src="https://storage.ning.com/topology/rest/1.0/file/get/2508866292?profile=original&width=128&height=96" width="128" height="96" alt="Thumbnail" /><br />
</a><br />See Dr Chris Johnson and the Scientific Computing and Imaging (SCI) Institute at work at the University of Utah. Topics include visualization, scientific computing, image analysis, biomedical computing, fire and explosion simulation, and computational biology.