transferlearning.app
#Transfer Learning | Applications
#TensorFlow | Solutions to accelerate machine learning tasks | Prepare data | Build machine learning (ML) models | Deploy models | Run models in production and keep them performing | Multidimensional array-based numeric computation (similar to NumPy) | GPU and distributed processing | Automatic differentiation | Model construction, training, and export
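A minimal sketch of the TensorFlow capabilities listed above: NumPy-like tensor computation, automatic differentiation with `tf.GradientTape`, and Keras model construction, training, and export. Shapes and hyperparameters are illustrative.

```python
import tensorflow as tf

# Multidimensional array-based computation, similar to NumPy
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.reduce_mean(tf.square(x))              # mean of elementwise squares

# Automatic differentiation
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = w * w + 2.0 * w
grad = tape.gradient(loss, w)                 # d(loss)/dw = 2w + 2 = 8.0

# Model construction, training, and export
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(tf.random.normal((64, 4)), tf.random.normal((64, 1)),
          epochs=2, verbose=0)
model.save("model.keras")                     # serialize for deployment
```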
#Intel | AI Kit | Processors
#Udacity | Simulator
#University of Texas at Austin | Transfer Learning for Reinforcement Learning on a Physical Robot
#Google Research | Visual Transfer Learning for Robotic Manipulation
#DSpace@MIT | Visual Transfer Learning for Robotic Manipulation
#Toyota Research Institute | Robot learning techniques, coupled with diffusion models | Developing systems that can help older people continue to live independently | Robots that can learn and adapt to new tasks | Teaching systems through teleoperation | Remotely driving the robot to provide demonstrations | Teleop device transmitting force between robot and person | Sight and force feedback to produce a fuller picture of the task | Force feedback | Flipping pancakes | Representing a robot visuomotor policy as a conditional denoising diffusion process | Centrally accessible cloud-based system | Creating Large Behavior Models
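A hedged sketch of the conditional denoising diffusion idea named above, not TRI's actual code: a network is trained to predict the noise added to a ground-truth action sequence, conditioned on an observation embedding, which is the standard DDPM objective applied to action chunks. The network and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

T = 100                                    # diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

class EpsModel(nn.Module):
    """Predicts the injected noise from (noisy actions, timestep, observation)."""
    def __init__(self, act_dim=7, horizon=16, obs_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(act_dim * horizon + 1 + obs_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim * horizon))
        self.act_dim, self.horizon = act_dim, horizon

    def forward(self, noisy_actions, t, obs):
        flat = noisy_actions.flatten(1)
        t_feat = t.float().unsqueeze(1) / T          # normalized timestep
        out = self.net(torch.cat([flat, t_feat, obs], dim=1))
        return out.view(-1, self.horizon, self.act_dim)

def training_step(model, actions, obs):
    """One denoising step: corrupt actions at a random t, regress the noise."""
    b = actions.shape[0]
    t = torch.randint(0, T, (b,))
    a_bar = alphas_cum[t].view(b, 1, 1)
    noise = torch.randn_like(actions)
    noisy = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise
    return nn.functional.mse_loss(model(noisy, t, obs), noise)

model = EpsModel()
loss = training_step(model, torch.randn(8, 16, 7), torch.randn(8, 64))
loss.backward()
```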
#ANYmal | Robot | Navigating as a wheeled quadruped | Standing upright on its hind legs, utilizing its front wheels as makeshift hands | Trained to perform practical tasks | Multimodal platform designed for last-mile delivery and logistics | GPS, LiDAR, and cameras for independent navigation | Reinforcement learning approach known as curiosity-driven learning | High-level sparse rewards | Independently discerning how to complete the entire task from the beginning | Learning process finely attuned to slight alterations in the training environment | Potential for innovative task completion in intricate and dynamic scenarios
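Curiosity-driven learning is a known technique; the sketch below shows its generic form, as an assumption rather than this robot's actual training code: a forward dynamics model's prediction error becomes an intrinsic reward that supplements the sparse task reward, pushing the agent to explore.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next state from (state, action)."""
    def __init__(self, state_dim=32, act_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, state_dim))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def curiosity_reward(model, state, action, next_state, scale=0.1):
    """Intrinsic reward = scaled prediction error of the forward model."""
    with torch.no_grad():
        pred = model(state, action)
        return scale * (pred - next_state).pow(2).mean(dim=-1)

# Total reward = sparse high-level reward (mostly zero) + curiosity bonus
model = ForwardModel()
s, a, s2 = torch.randn(4, 32), torch.randn(4, 12), torch.randn(4, 32)
sparse_reward = torch.zeros(4)
total = sparse_reward + curiosity_reward(model, s, a, s2)
```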
#Sanctuary AI | Phoenix humanoid robot | General-purpose humanoid robot | Form factor similar to an average-sized human | Carbon AI control system | Human-like intelligence | Robotic hand | Hand-eye coordination of object manipulation tasks | Haptic technology that mimics the sense of touch | AI model training | General-purpose robotics
#SEA.AI App | Detecting floating objects early | Using thermal and optical cameras to catch unsignalled craft, floating obstacles, containers, buoys, inflatables, kayaks and persons overboard
#Anybotics | Workforce App | Operate ANYmal robot from device | Set up and review robot missions | Industrial Inspection
#OpenAI | Giving developers access to Stack Overflow technical knowledge about coding
#IDS | Industrial image processing | 3D cameras | Digital twins can distinguish color | Higher reproducible Z-accuracy | Stereo cameras: 240 mm, 455 mm | RGB sensor | Distinguishing colored objects | Improved pattern contrast on objects at long distances | Z-accuracy: 0.1 mm at 1 m object distance | SDK | AI-based image processing web service | AI-based image analysis
#Flyability | Drones for industrial inspection and analysis | Confined space inspection | Collision and crash resistant inspection drone | 3D mapping | Volumetric measurement | Inspections of cargo ships, bridges, ports, steel mills, cement factories, liquid gas tanks, nuclear facilities, city-wide underground sewage systems | Ouster lidar
#Google DeepMind Technologies Limited | Creating advanced AI models and applications | Artificial intelligence systems ALOHA Unleashed and DemoStart | Helping robots perform complex tasks that require dexterous movement | Two-armed manipulation tasks | Simulations to improve real-world performance on a multi-fingered robotic hand | Helping robots learn from human demonstrations | Translating images to action | High level of dexterity in bi-arm manipulation | Robot has two hands that can be teleoperated for training and data collection | Allowing robots to learn how to perform new tasks with fewer demonstrations | Collecting demonstration data by remotely operating robot behavior | Applying diffusion method | Predicting robot actions from random noise | Helping robot learn from data | Complemented by DemoStart | DemoStart is helping new robots acquire dexterous behaviors in simulation | Google collaborating with Shadow Robot
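A hedged sketch of the sampling side of such a diffusion policy, not DeepMind's implementation: starting from pure Gaussian noise, a trained noise-prediction network is applied iteratively, DDPM-style, to denoise an action sequence conditioned on the observation. The `eps_model` signature and all dimensions are assumptions.

```python
import torch

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_cum = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_actions(eps_model, obs, horizon=16, act_dim=7):
    """Reverse diffusion: random noise -> executable action chunk."""
    a = torch.randn(obs.shape[0], horizon, act_dim)    # start from pure noise
    for t in reversed(range(T)):
        t_batch = torch.full((obs.shape[0],), t, dtype=torch.long)
        eps = eps_model(a, t_batch, obs)               # predicted noise
        coef = betas[t] / (1 - alphas_cum[t]).sqrt()
        mean = (a - coef * eps) / alphas[t].sqrt()
        noise = torch.randn_like(a) if t > 0 else torch.zeros_like(a)
        a = mean + betas[t].sqrt() * noise             # ancestral step
    return a                                           # denoised actions

# Stand-in network; a trained model would share this signature
eps_model = lambda a, t, obs: torch.zeros_like(a)
actions = sample_actions(eps_model, torch.randn(2, 64))
```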
#Shadow Robot Company | Humanoid robotic hand | Mimics human hand functionality and dimensions | Featuring 24 joints and 20 degrees of freedom
#Tampere University | Pneumatic touchpad | Soft touchpad sensing force, area and location of contact without electricity | Detection via pneumatic channels embedded in the device | Can be used in environments such as MRI machines | Soft robots | Rehabilitation aids | Made entirely of soft silicone | 32 channels that adapt to touch | Precise enough to recognise handwritten letters | Recognizes multiple simultaneous touches | If cancer tumours are found during an MRI scan, a pneumatic robot can take a biopsy while the patient is being scanned | Pneumatic device can be used in strong radiation or conditions where even a small spark of electricity would cause a serious hazard
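A hypothetical sketch of how the handwritten-letter recognition could work; the research group's actual method is not described here. The idea: treat the 32 channel pressures sampled over a short window as a feature vector and train an off-the-shelf classifier. All shapes, labels, and data below are placeholder assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

CHANNELS, WINDOW = 32, 50                   # 32 pneumatic channels, 50 time samples

# Placeholder data: real samples would come from the pressure sensors
X = np.random.rand(200, CHANNELS * WINDOW)  # flattened pressure traces
y = np.random.choice(list("abc"), 200)      # which letter was traced

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
clf.fit(X, y)                               # learn letter shapes from traces
letters = clf.predict(X[:5])                # predicted letters
```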
#Neptune Labs | neptune.ai | Tracking foundation model training | Model training | Reproducing experiments | Rolling back to the last working stage of a model | Transferring models across domains and teams | Monitoring parallel training jobs | Tracking jobs operating on different compute clusters | Rapidly identifying and resolving model training issues | Workflow set up to handle the most common model training scenarios | Tool to organize deep learning experiments
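A minimal sketch of typical experiment tracking with the neptune Python client (neptune 1.x-style API); the project name, parameters, and checkpoint file are placeholders.

```python
import neptune

run = neptune.init_run(project="my-workspace/my-project")  # reads NEPTUNE_API_TOKEN from env
run["parameters"] = {"lr": 1e-4, "batch_size": 32}         # log hyperparameters

for epoch, loss in enumerate([0.9, 0.7, 0.5]):             # training loop stub
    run["train/loss"].append(loss)                         # step-wise metric series

run["model/weights"].upload("model.pt")                    # attach checkpoint for rollback
run.stop()                                                 # flush and close the run
```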
#UCLA | AI model analyzing medical images of diseases | Deep-learning framework | SLice Integration by Vision Transformer (SLIViT) | Analyzing retinal scan, ultrasound video, CT, MRI | Identifying potential disease-risk biomarkers | Using a novel pre-training and fine-tuning method | Relying on large, accessible public data sets | NVIDIA T4 GPUs, NVIDIA V100 Tensor Core GPUs, NVIDIA CUDA used to conduct research | SLIViT makes large-scale, accurate analysis realistic | Disease biomarkers help understand the disease trajectory of patients | Tailoring treatment to patients based on biomarkers found through SLIViT | Model largely pre-trained on datasets of 2D scans | Fine-tuning model on 3D scans | Transfer-learned model identifies different disease biomarkers when fine-tuned on datasets consisting of imagery from very different modalities and organs | Trained on 2D retinal scans and then fine-tuned on MRI of the liver | Helping the model with downstream learning even across different imagery domains
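A hedged sketch of the 2D-to-3D transfer idea, not the actual SLIViT implementation: a backbone pre-trained on 2D scans encodes a 3D volume slice by slice, the slice features are aggregated, and only a small head is fine-tuned on the new modality. Module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class SliceTransfer(nn.Module):
    def __init__(self, backbone2d, feat_dim=128, n_classes=2):
        super().__init__()
        self.backbone = backbone2d                   # pre-trained on 2D scans
        self.head = nn.Linear(feat_dim, n_classes)   # fine-tuned on 3D data

    def forward(self, volume):                       # volume: (B, D, C, H, W)
        b, d = volume.shape[:2]
        feats = self.backbone(volume.flatten(0, 1))  # encode each slice
        feats = feats.view(b, d, -1).mean(dim=1)     # aggregate across slices
        return self.head(feats)

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in
model = SliceTransfer(backbone)
for p in model.backbone.parameters():                # freeze pre-trained weights,
    p.requires_grad = False                          # fine-tune only the head
logits = model(torch.randn(4, 20, 3, 32, 32))        # MRI-like volume batch
```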
#Linux Foundation | LF AI & Data | Fostering open source innovation in artificial intelligence and data | Open Platform for Enterprise AI (OPEA) | Creating flexible, scalable Generative AI systems | Promoting sustainable ecosystem for open source AI solutions | Simplifying the deployment of generative AI (GenAI) systems | Standardization of Retrieval-Augmented Generation (RAG) | Supporting Linux development and open-source software projects | Linux kernel | Linus Torvalds
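A generic sketch of the Retrieval-Augmented Generation (RAG) pattern that OPEA standardizes, not OPEA's actual components: embed documents, retrieve the most relevant ones for a query, and prepend them to the LLM prompt. The hashing-based embedding and the final generate() call are toy placeholders.

```python
import numpy as np

docs = ["ANYmal is a quadruped inspection robot.",
        "TensorFlow supports automatic differentiation.",
        "VDA 5050 standardizes AGV communication over MQTT."]

def embed(text, dim=64):
    """Toy embedding: hash each token into a fixed-size unit vector."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

index = np.stack([embed(d) for d in docs])       # build the vector index

def retrieve(query, k=1):
    scores = index @ embed(query)                # cosine similarity
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "How do AGVs talk to each other?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
# generate(prompt) would be the call into the serving LLM
```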
#UC Berkeley, CA, USA | Professor Trevor Darrell | Advancing machine intelligence | Methods for training vision models | Enabling robots to determine appropriate actions in novel situations | Approaches to make VLMs smaller and more efficient while retaining accuracy | How LLMs can be used as visual reasoning coordinators, overseeing the use of multiple task-specific models | Utilizing visual intelligence at home while preserving privacy | Focused on advancements in object detection, semantic segmentation and feature extraction techniques | Researched advanced unsupervised learning techniques and adaptive models | Researched cross-modal methods that integrate various data types | Advised SafelyYou, Nexar, SuperAnnotate, Pinterest, Tyzx, IQ Engines, Koozoo, BotSquare/Flutter, MetaMind, Trendage, Center Stage, KiwiBot, WaveOne, DeepScale, Grabango | Co-founder and President of Prompt AI
#Trossen Robotics | Pi Zero (π0) | Open-source vision-language-action model | Designed for general robotic control | Zero-shot learning | Dexterous manipulation | Aloha Kit | Single policy capable of controlling multiple types of robots without retraining | Generalist robotic learning | Pi Zero was trained on diverse robots | Pi Zero was transferred seamlessly to the bimanual Aloha platform | Pi Zero executed actions in a zero-shot setting without additional fine-tuning | Pi Zero runs on standard computational resources | Hardware: 12th Gen Intel(R) Core(TM) i9-12950HX | NVIDIA RTX A4500 16G | RAM 64G | OS: Ubuntu 22.04 | Dependencies: PyTorch, CUDA, Docker | PaliGemma | Pre-trained Vision-Language Model (VLM) | PaliGemma allows Pi Zero to understand scenes and follow natural language instructions | Image Encoding: Vision Transformer (ViT) to process robot camera feeds | Text Encoding: Converts natural language commands into numerical representation | Fusion: Aligns image features and text embeddings, helping the model determine which objects are relevant to the task | Pi Zero learns smooth motion trajectories using Flow Matching | Pi Zero learns a velocity field to model how actions should evolve over time | Pi Zero generates entire sequences of movement | Pi Zero predicts multiple future actions in one go | Pi Zero executes actions in chunks | ROS Robot Arms | Aloha Solo package | Intel RealSense cameras | Compact tripod mount | Tripod overhead camera | Ubuntu 22.04 LTS
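A hedged sketch of the flow-matching mechanism described above, not Physical Intelligence's π0 code: a network learns a velocity field that carries Gaussian noise toward an action chunk, and at inference the learned ODE is integrated with a few Euler steps to emit a whole multi-step action sequence in one go. The network and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

HORIZON, ACT_DIM, OBS_DIM = 16, 7, 64

class VelocityField(nn.Module):
    """Predicts the velocity of actions at time t, conditioned on obs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HORIZON * ACT_DIM + 1 + OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, HORIZON * ACT_DIM))

    def forward(self, actions, t, obs):
        x = torch.cat([actions.flatten(1), t.unsqueeze(1), obs], dim=1)
        return self.net(x).view(-1, HORIZON, ACT_DIM)

def fm_loss(model, actions, obs):
    """Regress the straight-line velocity that moves noise toward the data."""
    x0 = torch.randn_like(actions)                   # noise sample
    t = torch.rand(actions.shape[0])                 # random time in [0, 1]
    xt = (1 - t.view(-1, 1, 1)) * x0 + t.view(-1, 1, 1) * actions
    target = actions - x0                            # constant target velocity
    return nn.functional.mse_loss(model(xt, t, obs), target)

@torch.no_grad()
def sample(model, obs, steps=10):
    """Euler-integrate the learned field: noise -> full action chunk."""
    x = torch.randn(obs.shape[0], HORIZON, ACT_DIM)
    for i in range(steps):
        t = torch.full((obs.shape[0],), i / steps)
        x = x + model(x, t, obs) / steps
    return x

model = VelocityField()
loss = fm_loss(model, torch.randn(8, HORIZON, ACT_DIM), torch.randn(8, OBS_DIM))
chunk = sample(model, torch.randn(2, OBS_DIM))       # chunk of future actions
```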
#MemryX | AI Accelerator Module | Install system software, MemryX SDK, MemryX board | Compile AI model(s) into an executable file | Send data & receive results using APIs for AI processing | Up to 6 TFLOPs (1 GHz) per chip | Up to 16 chips (96 TOPS/TFLOPs) can be interconnected | Activations: bfloat16 (high accuracy) | Weights: 4 / 8 / 16 bit | Batch = 1 | 10.5M 8-bit parameters (weights) per chip | PCIe Gen 3 I/O | USB 3 interface | 0.6–2 W average power per chip | Smart compiler: optimized and automated AI mapping to MemryX hardware | Powerful APIs: Python and C/C++ low- and mid-level APIs for AI integration | Runtime: driver and firmware to support Windows or Linux distributions | Bit-accurate simulator: accurately tests models even without MemryX hardware
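A sketch of the compile-then-run flow above using the MemryX Python runtime. The AsyncAccl / connect_input / connect_output pattern follows MemryX's published SDK examples, but treat the exact signatures as assumptions and check the SDK documentation for your release.

```python
import numpy as np
from memryx import AsyncAccl          # MemryX runtime API (per published SDK examples)

accl = AsyncAccl(dfp="model.dfp")     # executable file compiled from the AI model

frames = iter(np.zeros((5, 224, 224, 3), dtype=np.float32))

def send_frame():
    """Input callback: returns the next frame, or None to end the stream."""
    return next(frames, None)

def receive_result(*outputs):
    """Output callback: consumes the accelerator's inference results."""
    print([o.shape for o in outputs])

accl.connect_input(send_frame)        # stream data to the chip
accl.connect_output(receive_result)   # receive results via the API
accl.wait()                           # block until the pipeline drains
```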
#Figure AI | Designing robots for the real world | Helix generalist humanoid Vision-Language-Action model reasoning like a human | Figure Exceeds $1B in Series C Funding at $39B Post-Money Valuation | Accelerating efforts to bring general-purpose humanoid robots into real-world environments at scale | Round led by Parkway Venture Capital | Significant investment from Brookfield Asset Management, NVIDIA, Macquarie Capital, Intel Capital, Align Ventures, Tamarack Global, LG Technology Ventures, Salesforce, T-Mobile Ventures, and Qualcomm Ventures | Unlocking the next stage of growth for humanoid robots | Scaling AI platform Helix and BotQ manufacturing | Scaling humanoid robots into homes and commercial operations | Building next-generation GPU infrastructure to accelerate training and simulation | Powering Helix core models for perception, reasoning, and control | Launching advanced data collection of human video and multimodal sensory inputs
#Sevensense | Robot autonomy system combining the benefits of Visual SLAM positioning with advanced AI local perception and navigation tech | Visual AI technology | AI-based autonomy solutions | Visual SLAM | Dynamic obstacle avoidance | Constructing accurate 3D maps of the environment using sensors built into robots | Algorithms precisely localize the robot by matching what it observes at any given time with the 3D map | Using an AI-driven perception system, the robot learns what is around it and predicts people's actions to react accordingly | Intelligent path planning moves the robot around static and dynamic obstacles to avoid unnecessary stops | Collaborating with each other, robots share important information like their position and changes in the mapped environment | Running indoors, outdoors, over ramps and on multiple levels without auxiliary systems | Repeatability of 4 mm guarantees precise docking | Updates the map and shares it with the entire fleet | Edge AI: all intelligence is on the vehicle, eliminating any issue related to loss of connectivity | VDA 5050 standardized interface for AGV communication | Alphasense Autonomy Evaluation Kit | Autonomous mobile robot (AMR) | Hybrid fleets: manual and autonomous systems work collaboratively | Equipping both autonomous and manually operated vehicles with advanced Visual SLAM and AI-powered perception | Workers and AMRs share the same map of the warehouse, with live position data for each of the vehicles | Turning every movement in the warehouse into shared spatial awareness that serves operators, machines, and managers alike | Equipping AGVs and other types of wheeled vehicles with multi-camera, industrial-grade Visual SLAM, providing accurate 3D positioning | Combining Visual SLAM with AI-driven 3D perception and navigation | Extending visibility to manually operated vehicles, such as forklifts, tuggers, and other types of industrial trucks | Unifying spatial awareness across fleets | Unlocking operational visibility | Ensuring every movement generates usable data | Providing a foundation for smarter, data-driven decision-making | Merging manual and autonomous workflows into a single connected ecosystem | Real-time vehicle tracking | Traffic heatmaps | Spaghetti diagrams | Predictive flow analytics | Redesigning layouts | Optimizing pick paths | Streamlining material handling | Accurate vehicle tracking | Safe-speed enforcement | Pedestrian proximity alerts | Lowering insurance claims | Ensuring regulatory compliance | Making equipment smarter, scalable, interoperable, and differentiable | Predictive maintenance | Fleet optimization | Visual AI Ecosystem connecting machines, people, processes, and data | Autonomous robotic floor cleaning | Industry 5.0 by adding a people-centric approach | Visual AI providing real-time, people-centric decision-making capabilities as part of autonomous navigation solutions | Collaborative Navigation transforming Autonomous Mobile Robots (AMRs) into mobile cobots | Visual AI conferring on robots the ability to understand the context of the environment, distinguishing between unobstructed and obstructed paths, categorizing the types of obstacles they encounter, and adapting their behavior dynamically in real time | Automatically generating a complete and very accurate 3D digital twin of an elevator shaft | Autonomous eTrolleys tackling the last-mile problem | Autonomous product delivery at airports
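For the VDA 5050 interface mentioned above, here is a hedged sketch of a minimal "order" message. The topic layout and field subset follow the public VDA 5050 specification, but the manufacturer and serial values are placeholders and the full schema has many more fields.

```python
import json
import time

# Topic layout per VDA 5050: interfaceName/majorVersion/manufacturer/serialNumber/subtopic
topic = "uagv/v2/ExampleMfr/AMR-001/order"

order = {
    "headerId": 1,
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "version": "2.0.0",
    "manufacturer": "ExampleMfr",
    "serialNumber": "AMR-001",
    "orderId": "order-42",
    "orderUpdateId": 0,
    "nodes": [{"nodeId": "n1", "sequenceId": 0, "released": True,
               "nodePosition": {"x": 1.5, "y": 3.0, "mapId": "warehouse"}}],
    "edges": [],
}

payload = json.dumps(order)
# An MQTT client (e.g. paho-mqtt) would publish this to the plant broker:
# client.publish(topic, payload); the AGV subscribes and executes the order.
```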
#Export-Import Bank of the United States | The official export credit agency of the United States | Supporting American job creation, prosperity and security through exporting | Issuing letters of interest for over $2.2 billion in financing for critical mineral projects | Supply Chain Resiliency Initiative (SCRI) to help secure supply chains of critical minerals and rare earth elements for U.S. businesses | Maintaining access to critical materials to secure U.S. jobs in sectors like battery, automobile, and semiconductor manufacturing | SCRI provides financing for international projects with signed long-term off-take contracts with U.S. companies, providing these U.S. companies with access to critical minerals from partner countries | SCRI: EXIM financing is tied to import authority and the financed amount depends on the amount of the off-take contract between the foreign project and the U.S. importer | Off-take agreements ensure that EXIM financing for critical minerals projects benefits American companies and workers | For U.S. domestic production in critical minerals and rare earth elements, EXIM can provide financing through Make More in America Initiative (MMIA) | SCRI: project must have signed off-take contracts that will result in the critical minerals and rare earth elements output being utilized in the United States, for products that are manufactured in the United States