<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ims.ut.ee/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Siims</id>
	<title>Intelligent Materials and Systems Lab - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://ims.ut.ee/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Siims"/>
	<link rel="alternate" type="text/html" href="https://ims.ut.ee/Special:Contributions/Siims"/>
	<updated>2026-05-16T10:31:49Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.2</generator>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Journal_Club_2013/2014&amp;diff=11337</id>
		<title>Journal Club 2013/2014</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Journal_Club_2013/2014&amp;diff=11337"/>
		<updated>2013-11-26T12:51:25Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Journal Club is a seminar where everybody has a chance to listen to and present recent interesting scientific publications in any field. Every presentation will be appreciated and rewarded with a friendly audience. Sign up! &lt;br /&gt;
&lt;br /&gt;
Location: Institute of Technology, Room 144 &lt;br /&gt;
&lt;br /&gt;
Time: '''Tuesdays at 15:30'''&lt;br /&gt;
&lt;br /&gt;
Leader: Rudolf Kiefer; substitute: Anna-Liisa Peikolainen &lt;br /&gt;
&lt;br /&gt;
=='''October'''==&lt;br /&gt;
{|&lt;br /&gt;
|'''1'''||Friedrich||[http://www.sciencedirect.com/science/article/pii/S1388248113003032# Flexible and wearable graphene/polypyrrole fibers towards multifunctional actuator applications]&lt;br /&gt;
|-&lt;br /&gt;
|'''8'''||Arko||[http://jmd.sagepub.com/content/27/1/5.abstract# My Instructor Made Me Do It]&lt;br /&gt;
|-&lt;br /&gt;
|'''15'''||Tõnis L|| [http://www.sciencedirect.com/science/article/pii/S1090513813000226# The role of facial hair in women's perceptions of men's attractiveness, health, masculinity and parenting abilities]&lt;br /&gt;
|-&lt;br /&gt;
|'''22'''||Veiko||&lt;br /&gt;
|-&lt;br /&gt;
|'''29'''||Simon|| [http://www.sciencemag.org/content/341/6151/1254.full# Interacting Gears Synchronize Propulsive Leg Movements in a Jumping Insect]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=='''November'''==&lt;br /&gt;
{|&lt;br /&gt;
|'''5'''||Heiki||Is Obesity an Addiction?&lt;br /&gt;
|-&lt;br /&gt;
|'''12'''||Sorry, no Journal Club today ||&lt;br /&gt;
|-&lt;br /&gt;
|'''19'''|| '''Šahab''' ||PhD Education ''(!!)''&lt;br /&gt;
|-&lt;br /&gt;
|  ||[[File:PhD.jpg]]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|'''26'''||Siim Schults|| How are Estonia, Nao and RoboCup connected?&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=='''December'''==&lt;br /&gt;
{|&lt;br /&gt;
|'''3'''||Tarmo||&lt;br /&gt;
|-&lt;br /&gt;
|'''10'''|| ||&lt;br /&gt;
|-&lt;br /&gt;
|'''17'''|| ||&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Journal_Club_2013/2014&amp;diff=11336</id>
		<title>Journal Club 2013/2014</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Journal_Club_2013/2014&amp;diff=11336"/>
		<updated>2013-11-26T12:50:45Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* November */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Journal Club is a seminar where everybody has a chance to listen to and present recent interesting scientific publications in any field. Every presentation will be appreciated and rewarded with a friendly audience. Sign up! &lt;br /&gt;
&lt;br /&gt;
Location: Institute of Technology, Room 144 &lt;br /&gt;
&lt;br /&gt;
Time: '''Tuesdays at 15:30'''&lt;br /&gt;
&lt;br /&gt;
Leader: Rudolf Kiefer; substitute: Anna-Liisa Peikolainen &lt;br /&gt;
&lt;br /&gt;
=='''October'''==&lt;br /&gt;
{|&lt;br /&gt;
|'''1'''||Friedrich||[http://www.sciencedirect.com/science/article/pii/S1388248113003032# Flexible and wearable graphene/polypyrrole fibers towards multifunctional actuator applications]&lt;br /&gt;
|-&lt;br /&gt;
|'''8'''||Arko||[http://jmd.sagepub.com/content/27/1/5.abstract# My Instructor Made Me Do It]&lt;br /&gt;
|-&lt;br /&gt;
|'''15'''||Tõnis L|| [http://www.sciencedirect.com/science/article/pii/S1090513813000226# The role of facial hair in women's perceptions of men's attractiveness, health, masculinity and parenting abilities]&lt;br /&gt;
|-&lt;br /&gt;
|'''22'''||Veiko||&lt;br /&gt;
|-&lt;br /&gt;
|'''29'''||Simon|| [http://www.sciencemag.org/content/341/6151/1254.full# Interacting Gears Synchronize Propulsive Leg Movements in a Jumping Insect]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=='''November'''==&lt;br /&gt;
{|&lt;br /&gt;
|'''5'''||Heiki||Is Obesity an Addiction?&lt;br /&gt;
|-&lt;br /&gt;
|'''12'''||Sorry, no Journal Club today ||&lt;br /&gt;
|-&lt;br /&gt;
|'''19'''|| '''Šahab''' ||PhD Education ''(!!)''&lt;br /&gt;
|-&lt;br /&gt;
|  ||[[File:PhD.jpg]]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|'''26'''||Siim Schults|| How are Estonia, the Naos and RoboCup connected?&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=='''December'''==&lt;br /&gt;
{|&lt;br /&gt;
|'''3'''||Tarmo||&lt;br /&gt;
|-&lt;br /&gt;
|'''10'''|| ||&lt;br /&gt;
|-&lt;br /&gt;
|'''17'''|| ||&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Student_projects&amp;diff=11331</id>
		<title>Student projects</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Student_projects&amp;diff=11331"/>
		<updated>2013-11-25T20:08:50Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Robootika */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:IMS poster.png|300px|right]]&lt;br /&gt;
&lt;br /&gt;
''Here are some of the activities that can be pursued in our research group. This is not a definitive list, and capable students are always welcome to bring interesting ideas of their own. Any of these topics can be taken all the way to a PhD defence.''&lt;br /&gt;
''Students whose work earns a grade of A will also receive appropriate pay.'' ''If interested, [[User:Alvo#Contacts|get in touch]]''. Some topic descriptions are in English; those topics are supervised by visiting researchers and lecturers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= '''Experimental materials science''' =&lt;br /&gt;
== Polymer actuators with carbon electrodes ==&lt;br /&gt;
&lt;br /&gt;
There are many different kinds of artificial muscles, that is, electroactive polymer composite materials. We work on ionic materials that operate at low voltage and have several advantages for use in microdevices and in medicine. Two research directions are currently being pursued. The first aims to develop the ionic liquid-carbon-polymer composites used in artificial muscles, drawing on various carbons: carbon aerogel, carbide-derived carbon, carbon nanotubes and many others. The second focuses on applying new technologies for fabricating artificial muscles. The goal of the work is to produce various actuator-sensor materials and to study how they can be made and what properties they have.&lt;br /&gt;
&lt;br /&gt;
==Artificial muscles in space applications==&lt;br /&gt;
&lt;br /&gt;
The materials we produce are lightweight and can be driven with low voltages, which makes them attractive to manufacturers of space technology devices.&lt;br /&gt;
The aim of the work is to study how radiation, temperature and the other damaging factors that act on materials in space affect them.&lt;br /&gt;
&lt;br /&gt;
==Fabrication and characterisation of multilayer artificial muscles based on conducting polymers==&lt;br /&gt;
&lt;br /&gt;
Artificial muscles, sensors and energy-harvesting devices are among the newest and most exciting development directions for electrically conducting organic polymers. They are expected to find use in medicine, robotics, and the space and defence industries. A substantial amount of development work is still needed before wide adoption, however. A multilayer design makes it possible to control a conducting-polymer material better and to improve its properties. Novel synthesis methods for fabricating metal-free artificial muscles have been developed in the IMS lab at the University of Tartu. The simple single-layer materials made so far have several shortcomings (loss of conductivity, sensitivity to the external environment). Adding layers whose ion mobility is opposite to that of the actuation-generating polymer layer should make it possible to avoid these shortcomings.&lt;br /&gt;
&lt;br /&gt;
==Design of actuator performance==&lt;br /&gt;
(EAP development, technology), Master's or Bachelor's student&lt;br /&gt;
Actuator research here focuses mainly on actuator preparation with a view to new devices. Carbide-derived carbon is applied both as the actuator material and as a conductive substrate on which a conducting polymer is deposited. The interaction between these materials is the object of the actuator studies, with the main goal of optimising actuator strain and stress. Improving actuator devices through additional chemical modification (hydrophobic or hydrophilic coatings) is also part of the new actuator design work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Flexible autofocus fluid lens development for invisible-shirt technology==&lt;br /&gt;
IT, technology and materials science; PhD student or 1 Master's student &lt;br /&gt;
An autofocus fluid lens is based on an interface formed between oil and water that acts as a lens and changes its shape (concave or convex) under an applied electric field. A new design based on conducting polymer actuators, and modifications thereof, changes the oil-electrolyte interface through membrane actuation, which requires less energy and is therefore suited to portable devices (mobile phones, laptops). A device has already been constructed for testing the membrane actuator. To apply different EAP actuators to it and to miniaturise the device, work is needed on installing the electronic control and on testing actuator membranes. A flexible fluid lens is the next step towards integrating it into smart-shirt technology to obtain invisible shirts (an optical effect).&lt;br /&gt;
&lt;br /&gt;
==Scanning ionic conductance microscopy (SICM)==&lt;br /&gt;
1 PhD or 2 Master's/Bachelor's students; IT, physics, materials science, technology&lt;br /&gt;
SICM is a new instrument for measuring ion movement at the surface of a conductive material. A double micropipette filled with electrolyte connects to the conductive sample, and currents at the nano- and picoampere level can be measured. We are looking for students to make the SICM instrument usable for actuator measurements, in order to obtain more information about the charging/discharging mechanism. By implementing electrochemical methods on the SICM we also want to establish micro-polymerisation of conducting polymers for new smart devices. The main task for IT students is to help us get the SICM instrument into operating condition. With a student in materials science, chemistry or physics we want to investigate in-situ actuation modes of actuator samples, to learn more about how ions are transported inside electroactive polymers.&lt;br /&gt;
&lt;br /&gt;
==Nanobubble formation mini device construction==&lt;br /&gt;
Technology, surface science; PhD, Master's or Bachelor's student&lt;br /&gt;
Nanobubbles (oxygen or ozone) in aqueous solution can be obtained by bursting microbubbles, and the goal of this project is to build a device that can generate them in a water stream. The intended application is the cleaning of solar cells, which is still an unsolved problem in the market. The energy harvest of solar panels that are left unclean (dust and dirt) drops by 20-40 percent within 5 years. The wish for a simple, environmentally harmless method is one reason for using only water and air for nanobubble formation and cleaning. The project is aimed at future collaboration with solar companies and at cleaning and fumigation applications.&lt;br /&gt;
&lt;br /&gt;
==Preparing actuator materials with carbon electrodes for industrial production==&lt;br /&gt;
&lt;br /&gt;
The project is to develop a material, and a methodology, for producing actuators with carbon electrodes using industrial processes. The broader goal of the work is the mass production of such materials. &lt;br /&gt;
&lt;br /&gt;
= '''Computational experiments and materials simulation''' =&lt;br /&gt;
&lt;br /&gt;
==Simulating material defects in high-frequency electric fields==&lt;br /&gt;
[[Image:Reklaamposter.png|right|thumb|400px]]&lt;br /&gt;
Bright student: if you are interested in today's cutting-edge science and would like to write your thesis on a CERN-related topic, working at CERN itself, get in touch and take part in developing the new CERN-based accelerator! '''(PhD opportunity!)'''&lt;br /&gt;
&lt;br /&gt;
The Compact Linear Collider (CLIC) is a next-generation linear accelerator under development at CERN, in which particles are accelerated along straight trajectories. The planned machine is 50 km long and will reach energies of 0.5 TeV - 5 TeV. To achieve such energies, accelerating electric fields of 100-150 MV/m are used. In such strong electric fields, however, frequent electrical breakdowns on the accelerator electrodes become a serious problem. &lt;br /&gt;
&lt;br /&gt;
The breakdowns manifest as vacuum arcs (arc discharges in vacuum), and it is generally assumed that a vacuum arc starts from field-enhancing, nanoscale needle-like surface defects; the formation mechanism of these defects is unclear. '''Reducing electrical breakdowns below the critical limit is a problem of central importance for building CLIC!'''&lt;br /&gt;
 &lt;br /&gt;
One of the most promising ways to improve the accelerator structure is to find new materials that can withstand strong electric fields and rapid changes in them. The key problem in finding new materials is understanding the physical processes that take place in the material before and during a breakdown. The research uses various computational methods, such as '''molecular dynamics, the finite element method and kinetic Monte Carlo''', to explain the origins of the surface defects that lead to electrical breakdowns. Carrying out the required computer simulations means that, broadly speaking, '''multiscale simulations''' are used, covering materials modelling from the atomistic scale up to the macroscale.&lt;br /&gt;
&lt;br /&gt;
Application areas of a linear collider include physics beyond the Standard Model, precision measurements of the Higgs boson, and medical fields such as cancer therapy.&lt;br /&gt;
&lt;br /&gt;
==Studying artificial muscle materials with various computer simulation methods==&lt;br /&gt;
&lt;br /&gt;
* This is a material that an external electric field can make change shape: bend, swell or contract, just like a real muscle&lt;br /&gt;
* an artificial muscle material can also respond to an externally imposed mechanical deformation with an electrical signal&lt;br /&gt;
* an artificial muscle works silently and is itself very small&lt;br /&gt;
* the artificial muscle materials under study include such &amp;quot;hits&amp;quot; as graphene and ionic liquids&lt;br /&gt;
* computer simulations take you &amp;quot;inside&amp;quot; the material, letting you see what stays hidden in experiments, giving information about the processes taking place and hints for improving the materials&lt;br /&gt;
* want to know how a 2 cm strip of artificial muscle moves? take the finite element method and you will see the stresses and deformations during the shape change&lt;br /&gt;
* want to know how changing the electrode geometry affects the capacity of a lithium-ion battery, that is, how long your electric car could race along the Tartu-Tallinn highway? take the finite element method and you can calculate the discharge rate of the battery powering your electric car&lt;br /&gt;
* want to know how the atoms and molecules in an artificial muscle, and in the electrodes and electrolyte of a lithium-ion battery, move and affect one another? take a molecular dynamics simulation and you can enter a world 10000 times smaller than the diameter of a hair&lt;br /&gt;
* want to sit virtually on every atom and watch how the electron cloud of one atom scrambles that of another? take quantum-chemical molecular dynamics and your ride on the crests of the wave functions will be wilder than Ristna cape during storm Katja.&lt;br /&gt;
&lt;br /&gt;
==Optimising lithium-ion battery architecture with computer simulations==&lt;br /&gt;
&lt;br /&gt;
Portable micropower sources are an important factor in many emerging technology areas, because the shrinking dimensions of microelectronics have left the development of small power sources far behind. The low energy capacity of suitable portable power sources is becoming an obstacle to several technology areas, such as wearable computing technology (WCT), microelectromechanical systems (MEMS) and biomedical micromachines. One of the key problems for the successful operation of such devices is equipping them with power sources that, on the one hand, supply the device with sufficient energy and, on the other, are as small and lightweight as possible. In such configurations the shortcomings of existing, essentially two-dimensional (2D) lithium-ion batteries become apparent: with such small volumes and areas, sufficient energy densities cannot be achieved. This problem can be solved by adopting 3D microbatteries (MB). &lt;br /&gt;
The goal of optimising lithium-ion battery architecture is to build a working 3D-MB whose energy density and capacity are at least an order of magnitude higher than those of the batteries in use today. To develop a working 3D-MB, various microbattery architectures are developed and studied; selecting and optimising a suitable one is greatly simplified by theoretical studies carried out as computer simulations, which make it possible to test different 3D-MB architectures, solve optimisation problems to find the optimal electrode geometry, optimise the electrode surface, study the behaviour of the whole battery during charge and discharge, and optimise suitable microbattery architectures. &lt;br /&gt;
At the macro level such studies use the finite element method (FEM), and at the micro level the molecular dynamics (MD) simulation method. The simulations are carried out with the software packages COMSOL Multiphysics and Elmer for FEM, and with dl_poly for MD.&lt;br /&gt;
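To give a feel for the macro-level modelling, the discharge of a single electrode can be caricatured with a one-dimensional diffusion model. The sketch below uses explicit finite differences in dimensionless units; all numbers are invented for illustration, and the actual project would use the FEM packages named above rather than this toy scheme.

```python
import numpy as np

# Toy 1D model of Li depletion in an electrode slab during constant-current
# discharge, solved with explicit finite differences (dimensionless units).
n = 51
dx = 1.0 / (n - 1)
dt = 0.4 * dx**2          # stable explicit step (requires dt <= dx^2 / 2)
j = 0.5                   # constant discharge flux at the right face
c = np.ones(n)            # normalised Li concentration, fully charged

steps = 0
while c[-1] > 0 and steps < 100000:
    # diffusion update for interior points
    c[1:-1] += dt * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[0] = c[1]           # blocked (no-flux) left boundary
    c[-1] = c[-2] - j * dx  # constant-flux extraction at the surface
    steps += 1

# 'steps' now approximates the time until the surface concentration
# hits zero, i.e. the cut-off of this toy cell at the chosen current.
```

Varying `j` (the current) or the slab geometry and re-running is exactly the kind of optimisation loop that the FEM studies perform on realistic 3D-MB architectures.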
&lt;br /&gt;
= '''Actuators, devices and their control''' =&lt;br /&gt;
==Controlling a device for measuring the electromechanical properties of IPMCs==&lt;br /&gt;
&lt;br /&gt;
The aim of the work is to assemble an experimental device that measures the electromechanical properties of electroactive polymers. These materials are used as artificial muscles in various applications. The result of the work must be a module that allows the device to be controlled over USB.&lt;br /&gt;
&lt;br /&gt;
==Constructing an autonomous device using IPMC actuators==&lt;br /&gt;
&lt;br /&gt;
The goal is to construct autonomous devices that move with the help of so-called artificial muscle materials, and to describe how they work. A few ideas: an &amp;quot;insect&amp;quot;, a wheel, a mini glider, a micro-humanoid, etc.&lt;br /&gt;
&lt;br /&gt;
==Controlling actuators made of carbon-polymer materials==&lt;br /&gt;
&lt;br /&gt;
The aim of the work is to parameterise and study the electromechanical properties of the novel materials created in the lab by materials scientists, i.e. to create the necessary electromechanical and physico-chemical models, describe these models and validate them against experimental results. The work is suitable (in different scopes) for Bachelor's, Master's and doctoral theses. Foreign-language skills are required, as are the desire and ability to work from time to time in various laboratories at universities abroad.&lt;br /&gt;
&lt;br /&gt;
==Studying energy harvesters made of IPMC/carbon-polymer materials==&lt;br /&gt;
&lt;br /&gt;
The aim of the work is to parameterise and study the electromechanical properties of the novel materials created in the lab by materials scientists, with the goal of converting the energy of ambient vibrations into electrical energy. The work involves creating the necessary electromechanical and physico-chemical models, describing these models and validating them against experimental results. The work is suitable (in different scopes) for Bachelor's, Master's and doctoral theses. Foreign-language skills are required, as are the desire and ability to work from time to time in various laboratories at universities abroad.&lt;br /&gt;
&lt;br /&gt;
== Cool ideas for applying artificial muscles==&lt;br /&gt;
=== An expressive door knocker ===&lt;br /&gt;
&lt;br /&gt;
Build an expressive door knocker out of artificial muscles; see http://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=-Kee7iyp_3U&amp;amp;list=TLC5Famb33RxE&lt;br /&gt;
&lt;br /&gt;
===Construction and prototyping of a focusable lens system===&lt;br /&gt;
&lt;br /&gt;
The goal is to build a simple prototype that can change the focal length of a soft lens by manipulating fluid pressure. The active element is an actuator made of carbon-polymer material, a so-called artificial muscle.&lt;br /&gt;
&lt;br /&gt;
=== Snail ===&lt;br /&gt;
&lt;br /&gt;
Build a simple prototype that moves like a snail.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= '''Signal and Image Processing''' =&lt;br /&gt;
&lt;br /&gt;
==Image Processing Apps for Android Devices==&lt;br /&gt;
&lt;br /&gt;
Turn the camera of your Android device into a cool semi-professional camera by introducing various image processing tools and options, such as '''filtering''', '''illumination enhancement''', '''histogram display for better shots''', and many more. Alongside this application, we will develop a '''photo slide player''' that smartly chooses and plays music that fits each photo.&lt;br /&gt;
&lt;br /&gt;
You should have ''good knowledge of programming Android devices'' before starting the project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TMS320C6713 DSP Board==&lt;br /&gt;
&lt;br /&gt;
The TMS320C6713 device belongs to the floating-point DSP generation of the TMS320C6000™ DSP platform. The C6713 device is based on the high-performance, advanced very-long-instruction-word (VLIW) architecture developed by Texas Instruments (TI), making this DSP an excellent choice for multichannel and multifunction applications. The C6713 has a rich peripheral set that includes two Multichannel Audio Serial Ports (McASPs), two Multichannel Buffered Serial Ports (McBSPs), two Inter-Integrated Circuit (I2C) buses, one dedicated General-Purpose Input/Output (GPIO) module, two general-purpose timers, a host-port interface (HPI), and a glueless external memory interface (EMIF) capable of interfacing to SDRAM, SBSRAM, and asynchronous peripherals. &lt;br /&gt;
&lt;br /&gt;
You can ''design'' your own highpass, lowpass, bandpass, and bandstop ''filters'', ''denoise'' your input voice signal, add or cancel ''echo'', as well as introduce positive and negative ''feedback'' to your voice signal.&lt;br /&gt;
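A filter of the kind mentioned above can be prototyped on a PC before being ported to the C6713. A minimal sketch in Python with SciPy (the sampling rate and cutoff are illustrative assumptions, not board settings):

```python
import numpy as np
from scipy import signal

fs = 8000.0  # sampling rate in Hz (illustrative)

# 4th-order Butterworth lowpass filter with a 1 kHz cutoff
b, a = signal.butter(4, 1000.0, btype="low", fs=fs)

# test signal: a 200 Hz tone (passband) plus a 3 kHz tone (stopband)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)

# apply the filter; the 3 kHz component is strongly attenuated
y = signal.lfilter(b, a, x)
```

Changing `btype` to `"high"`, `"bandpass"` or `"bandstop"` gives the other filter families named above; on the board the same `b`, `a` coefficients would drive a fixed-point or floating-point difference-equation loop.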
&lt;br /&gt;
&lt;br /&gt;
==Iris Recognition System Using Different Colour Channel Statistics==&lt;br /&gt;
&lt;br /&gt;
This project proposes a novel system for person identification using iris recognition. The system is fast, efficient and reliable, using data acquired from iris images for identification. It uses not only the luminance but also the chrominance information of the iris images. The irises of different people contain distinctive patterns in both the luminance and chrominance domains, and the proposed system exploits the pixel statistics of these patterns for high-performance iris recognition. The system comprises image acquisition devices such as cameras and the necessary interface cards, as well as a computer system that runs the algorithms developed for recognising persons by their iris patterns. The proposed system differs in methodology and performance from existing iris recognition systems, which are mainly based on the algorithm introduced by John Daugman. Most methods in the literature follow Daugman's approach and consider only the luminance information of the iris images; we propose to also include the chrominance information acquired from colour channels such as hue and saturation. The system could be used in security applications, including secure entry into important buildings such as banks, treasuries and airports, and could be used at border checkpoints to shorten queues.&lt;br /&gt;
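The hue/saturation statistics described above can be illustrated with a short sketch. This is a toy example, not the proposed system: `rgb_to_hsv` and `chroma_features` are hypothetical helper names, and a real pipeline would first segment and normalise the iris region.

```python
import numpy as np

def rgb_to_hsv(img):
    """Convert an (H, W, 3) float RGB image in [0, 1] to HSV in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    maxc = img.max(axis=-1)
    minc = img.min(axis=-1)
    v = maxc
    delta = maxc - minc
    s = np.where(maxc > 0, delta / np.where(maxc > 0, maxc, 1), 0.0)
    d = np.where(delta > 0, delta, 1)  # avoid division by zero
    h = np.where(maxc == r, ((g - b) / d) % 6,
        np.where(maxc == g, (b - r) / d + 2, (r - g) / d + 4)) / 6.0
    h = np.where(delta == 0, 0.0, h)   # hue undefined for greys
    return np.stack([h, s, v], axis=-1)

def chroma_features(img, bins=16):
    """Concatenated hue and saturation histograms as a feature vector."""
    hsv = rgb_to_hsv(img)
    h_hist, _ = np.histogram(hsv[..., 0], bins=bins, range=(0, 1), density=True)
    s_hist, _ = np.histogram(hsv[..., 1], bins=bins, range=(0, 1), density=True)
    return np.concatenate([h_hist, s_hist])
```

Feature vectors like these could then be compared with a histogram distance; a Daugman-style luminance code would be computed separately and fused with them.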
&lt;br /&gt;
= '''Robotics''' =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Applying the Aldebaran Nao==&lt;br /&gt;
&lt;br /&gt;
Aldebaran Nao is a half-metre-tall humanoid robot equipped with numerous sensors and motors. The task is to apply Nao to various useful and interesting jobs: communication, playing football, and so on. It is driven by an RM/Intel Atom-based minicomputer.&lt;br /&gt;
&lt;br /&gt;
=== Robocup 2014 SPL team Philosopher ===&lt;br /&gt;
Team Philosopher is planning to participate in the upcoming RoboCup in the Standard Platform League. RoboCup is an international robotics competition founded in 1997. The aim is to promote robotics and AI research by offering a publicly appealing but formidable challenge [http://en.wikipedia.org/wiki/RoboCup]. &lt;br /&gt;
The SPL consists of several competitions, including 5 vs. 5 soccer, drop-in soccer and technical challenges. The league's standard platform this year is the Nao robot.&lt;br /&gt;
Members: Gholamreza Anbarjafari, Kristian Hunt, Roland Pihlakas, Viljar Puusepp, and Siim Schults.&lt;br /&gt;
Contact: siimsch@ut.ee&lt;br /&gt;
&lt;br /&gt;
== Glasses for Blind People: An Automatic Way for Blind People to Interact with the Environment ==&lt;br /&gt;
The project consists of several stages, and a researcher can focus on one of the following modules.&lt;br /&gt;
'''Image Processing Modules:'''&lt;br /&gt;
In this module the image captured by the camera is analysed. In particular, faces are detected and extracted, and if a face is one of those in the database it is recognised. In parallel, the person's facial expression is also extracted. This helps blind people find out whether the person in front of them is happy or sad, which improves the quality of communication. Various image pre-processing steps, such as image illumination enhancement, are also required at this stage and can be proposed and employed.&lt;br /&gt;
'''GPS Modules:'''&lt;br /&gt;
Using GPS, and possibly wireless data communication with a server, can help blind citizens determine their current location, and with mapping systems such as Google Maps they can make their way to their destination.&lt;br /&gt;
'''Sensor Modules:'''&lt;br /&gt;
To speed up certain reactions, such as avoiding obstacles, ultrasonic or optical sensors are used in this project. The sensors work independently of the visual system, as their only task is to help the blind person walk. The project opens various new research directions: for example, new algorithms such as facial expression recognition can be added to the computer vision module, so that a voice prompt can inform the blind person of the current expression of the person in front of them. The benefits this project brings to society, such as better communication for blind people, are also significant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Educational robots==&lt;br /&gt;
&lt;br /&gt;
The goal is to develop entertaining robotics-themed tools for children, both for the AHHAA science centre and for a so-called robot theatre. The work includes designing a device and realising it as a working prototype. There are several concrete ideas, but new ideas are welcome.&lt;br /&gt;
&lt;br /&gt;
= '''Partner-related topics''' =&lt;br /&gt;
&lt;br /&gt;
==Studying mechanoelectrical sensors==&lt;br /&gt;
&lt;br /&gt;
The aim of the work is to experimentally study the properties of various materials that convert mechanical motion into electricity. The work includes setting up the experiment and carrying out computer-controlled measurements in the LabVIEW environment. Well suited to computer engineering, physics and materials science students.&lt;br /&gt;
&lt;br /&gt;
==A robotic mannequin for the clothing industry==&lt;br /&gt;
&lt;br /&gt;
The goal of the project is to develop the lower body of a human-shaped mannequin for the clothing industry, to speed up and improve the development of designer clothing.&lt;br /&gt;
The project requires designing the mechanical assemblies, realising the electronic control units, modelling and creating the algorithms for controlling the mannequin's outer surface, and developing the system's control software. User-facing software also needs to be developed. The project involves collaboration with many interesting people who are active in the fashion and clothing industry.&lt;br /&gt;
Suitable for several Bachelor's and Master's theses; the scope of the work varies with the level.&lt;br /&gt;
&lt;br /&gt;
The project also has a concrete application; see [http://www.fits.me www.fits.me]&lt;br /&gt;
&lt;br /&gt;
==A touch-sensitive sensor for the robotic mannequin==&lt;br /&gt;
&lt;br /&gt;
The goal of the project is to develop a touch-sensitive &amp;quot;skin&amp;quot; for the human-shaped mannequin.&lt;br /&gt;
The project requires studying and testing various sensors, and finding the relevant measurement ranges and accuracies according to the needs of the application. The project involves collaboration with many interesting people who are active in the fashion and clothing industry.&lt;br /&gt;
Suitable for several Bachelor's and Master's theses; the scope of the work varies with the level.&lt;br /&gt;
&lt;br /&gt;
= '''Teaching-related topics''' =&lt;br /&gt;
==Studying the operation of a sensor/actuator and compiling a tutorial==&lt;br /&gt;
&lt;br /&gt;
The aim of the work is to experimentally parameterise a sensor/actuator used in robotics/automation and, based on the results, to compile a protocol and a methodology for its use.&lt;br /&gt;
&lt;br /&gt;
== Compiling instructional materials for school robotics==&lt;br /&gt;
&lt;br /&gt;
The aim of the work is to compile instructional materials and engaging assignments for teachers, as well as assignments that help children learn physics, mathematics, chemistry and biology.&lt;br /&gt;
&lt;br /&gt;
=General information for Bachelor's and Master's thesis students=&lt;br /&gt;
&lt;br /&gt;
You have two supervisors. We expect you to contact at least one of them at least once a week and talk through your problems and progress.&lt;br /&gt;
&lt;br /&gt;
The first version of the thesis must be submitted by 1 May at the latest. Only documented medical reasons are acceptable grounds for lateness. The first version must contain:&lt;br /&gt;
# an introduction explaining why the result of the project is needed and what others around the world have done in this field;&lt;br /&gt;
# the theoretical/mathematical/modelling foundations of the project, written out in full;&lt;br /&gt;
# a detailed description of the work done (there is never too much detail; delete is the easiest function your supervisor can apply :) when rewriting your text);&lt;br /&gt;
# the results of the work, i.e. either the measurement results or the technical description of a working(!) prototype of the device, together with the device itself;&lt;br /&gt;
# an assessment of your own work, i.e. an analysis of how the results could be developed further, an analysis of the results, and an assessment of the quality of the outcome.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
[[vanad teemad]]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Student_projects&amp;diff=11330</id>
		<title>Student projects</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Student_projects&amp;diff=11330"/>
		<updated>2013-11-25T20:07:48Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Robocup 2014 SPL */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:IMS poster.png|300px|right]]&lt;br /&gt;
&lt;br /&gt;
''Here are some of the projects that can be carried out in our research group. This is not an exhaustive list, and capable students are always welcome to bring interesting ideas of their own. Every topic can be pursued all the way to a PhD defence.''&lt;br /&gt;
''Students who complete their thesis with a grade of A will also receive appropriate remuneration.'' ''If you are interested, [[User:Alvo#Contacts|get in touch]]''. Some topic descriptions are in English; those topics are supervised by visiting researchers and lecturers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= '''Experimental materials science''' =&lt;br /&gt;
== Polymer actuators with carbon electrodes ==&lt;br /&gt;
&lt;br /&gt;
There are many different kinds of artificial muscles, i.e. electroactive polymer composite materials. We work on ionic materials that operate at low voltage and therefore have several advantages for use in microdevices and in medicine. Two research directions are currently being pursued. The first aims to develop the ionic liquid-carbon-polymer composites used in artificial muscles, employing various carbons: carbon aerogel, carbide-derived carbon, carbon nanotubes and others. The second focuses on applying new fabrication technologies for artificial muscles. The goal of the thesis is to prepare various actuator-sensor materials and to study their fabrication routes and their properties.&lt;br /&gt;
&lt;br /&gt;
== Artificial muscles in space applications ==&lt;br /&gt;
&lt;br /&gt;
The materials we produce are lightweight and can be driven with low voltages, which makes them interesting to manufacturers of space technology hardware.&lt;br /&gt;
The goal of this thesis is to study the effects of radiation, temperature and the other damaging factors that act on materials in space.&lt;br /&gt;
&lt;br /&gt;
== Fabrication and characterisation of multilayer artificial muscles based on conducting polymers ==&lt;br /&gt;
&lt;br /&gt;
Artificial muscles, sensors and energy-harvesting devices are among the newest and most exciting application areas of electrically conducting organic polymers. They are expected to find use in medicine, robotics, and the space and defence industries. A substantial amount of development work is still needed, however, before they can be adopted on a large scale. A multilayer design makes it possible to control the conducting polymer material better and to improve its properties. Novel synthesis methods for producing metal-free artificial muscles have been developed in the IMS lab at the University of Tartu. The simple single-layer materials made so far have several shortcomings (loss of conductivity, sensitivity to the external environment). Adding layers with ion mobility opposite to that of the actuating polymer layer should make it possible to avoid these shortcomings.&lt;br /&gt;
&lt;br /&gt;
== Design of actuator performance ==&lt;br /&gt;
(EAP development, technology), Master's or Bachelor's student&lt;br /&gt;
Actuator research here focuses mainly on actuator preparation with new devices in mind. Carbide-derived carbon is applied both as the actuator material and as a conductive substrate onto which a conducting polymer is deposited. The interaction between these materials is the object of the actuator studies, with the main goal of optimising actuator strain and stress. Improving actuator devices through additional chemical modification (hydrophobic or hydrophilic coatings) is also part of the new actuator design work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Flexible autofocus fluid lens development for invisible shirt technology ==&lt;br /&gt;
IT, technology and materials science; PhD student or 1 Master's student &lt;br /&gt;
An autofocus fluid lens is based on the interface formed between oil and water, which acts as a lens and changes its shape (concave or convex) under an applied electric field. A new design based on conducting polymer actuators and their modifications changes the oil-electrolyte interface through membrane actuation, which requires less energy and therefore suits portable devices (cell phones, laptops). A device for testing the membrane actuator has already been constructed. To apply different EAP actuators to it and to miniaturise the device, work is needed on installing the electronic control and on testing actuator membranes. A flexible fluid lens is the next step towards integrating the lens into smart-shirt technology to obtain 'invisible' shirts (an optical effect).&lt;br /&gt;
&lt;br /&gt;
== Scanning ion conductance microscopy (SICM) ==&lt;br /&gt;
1 PhD or 2 Master's/Bachelor's students; IT, physics, materials science, technology&lt;br /&gt;
SICM is a new instrument for measuring ion movement at the surface of a conductive material. Double micropipettes filled with electrolyte connect to the conductive sample, and currents at the nano- and picoampere level can be recorded. We are looking for students to make the SICM instrument usable for actuator measurements, in order to learn more about the charging/discharging mechanism. By implementing an electrochemical method on the SICM we also want to establish micro-polymerisation of conducting polymers for new smart devices. The main task for IT students is to help us bring the SICM instrument into operating condition. With a student of materials science, chemistry or physics we want to investigate in situ actuation modes of actuator samples, to learn more about how ions are transported inside electroactive polymers.&lt;br /&gt;
&lt;br /&gt;
== Nanobubble formation mini device construction ==&lt;br /&gt;
Technology, surface science; PhD, Master's or Bachelor's student&lt;br /&gt;
Nanobubbles (oxygen or ozone) in aqueous solution can be produced by bursting microbubbles, and the goal of this project is to build a device that generates them inside a water stream. The intended application is the cleaning of solar cells, a problem the market has not yet solved: because of dust and dirt, the energy harvested by uncleaned solar panels drops by 20-40 percent within 5 years. The wish for a simple, environmentally harmless method is one reason for using nothing but water and air for nanobubble formation and cleaning. The project aims at future collaboration with solar companies for cleaning and fumigation purposes.&lt;br /&gt;
&lt;br /&gt;
== Preparing the industrial production of actuator material with carbon electrodes ==&lt;br /&gt;
&lt;br /&gt;
The aim of the project is to develop a material and a methodology for manufacturing actuators with carbon electrodes using industrial processes. The broader goal of the work is the mass production of such materials. &lt;br /&gt;
&lt;br /&gt;
= '''Computational experiments and materials simulation''' =&lt;br /&gt;
&lt;br /&gt;
== Simulating material defects in high-frequency electric fields ==&lt;br /&gt;
[[Image:Reklaamposter.png|right|thumb|400px]]&lt;br /&gt;
Smart student, if you are interested in today's cutting-edge science and would like to write your thesis on a CERN-related topic while working at CERN, get in touch and take part in developing the new CERN-based accelerator! '''(PhD opportunity!)'''&lt;br /&gt;
&lt;br /&gt;
The Compact Linear Collider (CLIC) is a next-generation linear accelerator under development at CERN, in which particles are accelerated along straight trajectories. The planned machine is 50 km long and will reach energies of 0.5 TeV - 5 TeV. Achieving such energies requires accelerating electric fields of 100-150 MV/m. In fields this high, however, frequent electrical breakdowns on the accelerator electrodes become a serious problem. &lt;br /&gt;
&lt;br /&gt;
The breakdowns manifest themselves as vacuum arcs (arc discharges in vacuum). It is generally assumed that a vacuum arc starts from needle-like nanoscale surface defects that locally amplify the electric field; the formation mechanism of these defects is unclear. '''Reducing the electrical breakdown rate below the critical limit is a problem of central importance for building CLIC!'''&lt;br /&gt;
&lt;br /&gt;
One of the most promising ways to improve the accelerator structure is to find new materials that can withstand high electric fields and rapid field changes. The key problem in finding such materials is understanding the physical processes that take place in the material before and during a breakdown. The research uses various computational methods, such as '''molecular dynamics, the finite element method and kinetic Monte Carlo''', to explain the origin of the surface defects that lead to electrical breakdowns. Carrying out the necessary computer simulations means that, in the big picture, so-called '''multiscale simulations''' are used, covering materials simulation from the atomistic scale up to the macroscale.&lt;br /&gt;
&lt;br /&gt;
Application areas of the linear collider include physics beyond the Standard Model, precision measurements of the Higgs boson, and medical fields such as cancer therapy.&lt;br /&gt;
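The field-enhancement picture above can be made quantitative with the textbook Fowler-Nordheim expression for field-emission current density: a needle-like protrusion multiplies the applied macroscopic field by a geometric enhancement factor beta, and because the emitted current depends exponentially on the local field, even a modest beta turns a harmless surface field into a strong emitter. A minimal numerical sketch (standard textbook Fowler-Nordheim constants; the copper work function and the beta values are illustrative assumptions, not CLIC results):

```python
import math

# Elementary Fowler-Nordheim model: J = (A / phi) * F^2 * exp(-B * phi^1.5 / F)
A = 1.541434e-6     # A eV V^-2, first Fowler-Nordheim constant
B = 6.830890e9      # eV^-1.5 V m^-1, second Fowler-Nordheim constant
PHI_CU = 4.6        # eV, approximate work function of copper (assumption)

def fn_current_density(applied_field, beta):
    """Field-emission current density (A/m^2) at local field beta * applied_field."""
    f = beta * applied_field                      # local field at the defect tip
    return A / PHI_CU * f ** 2 * math.exp(-B * PHI_CU ** 1.5 / f)

# CLIC-scale surface field of 100 MV/m: a flat surface (beta = 1) emits
# essentially nothing, while a defect with beta ~ 50 emits strongly.
flat = fn_current_density(100e6, 1)
tip = fn_current_density(100e6, 50)
```

The exponential dependence is the whole point: going from beta = 1 to beta = 50 raises the current density by hundreds of orders of magnitude, which is why eliminating nanoscale protrusions matters so much.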
&lt;br /&gt;
== Studying artificial muscle materials with different computer simulation methods ==&lt;br /&gt;
&lt;br /&gt;
* This is a material that an external electric field can make change shape: bend, swell or contract, just like a real muscle&lt;br /&gt;
* an artificial muscle material can also respond to an external mechanical deformation with an electrical signal&lt;br /&gt;
* an artificial muscle works silently and is itself very small&lt;br /&gt;
* the artificial muscle materials under study include such &amp;quot;hits&amp;quot; as graphene and ionic liquids&lt;br /&gt;
* computer simulations take you &amp;quot;inside&amp;quot; the material, letting you see what stays hidden in experiments, giving information about the processes taking place and hints for improving the materials&lt;br /&gt;
* want to know how a 2 cm strip of artificial muscle moves? Take the finite element method and you will see the stresses and deformations during the shape change&lt;br /&gt;
* want to know how changing the electrode geometry affects the capacity of a lithium-ion battery, i.e. how long your electric car could speed along the Tartu-Tallinn highway? Take the finite element method and you can calculate how fast the battery discharges while powering your car&lt;br /&gt;
* want to know how atoms and molecules move and interact in an artificial muscle and in the electrodes and electrolyte of a lithium-ion battery? Take a molecular dynamics simulation and you can enter a world 10000 times smaller than the diameter of a hair&lt;br /&gt;
* want to virtually sit on every atom and watch how the electron cloud of one atom scrambles that of another? Take quantum-chemical molecular dynamics, and your ride on the crests of the wavefunctions will be wilder than at Ristna cape during storm Katja.&lt;br /&gt;
&lt;br /&gt;
== Optimising lithium-ion battery architecture with computer simulations ==&lt;br /&gt;
&lt;br /&gt;
Portable micro-battery power is a key factor in many emerging technologies, because the shrinking dimensions of microelectronics have left the development of small-scale power sources far behind. The low energy capacity of suitable portable power sources is becoming an obstacle to several technology areas, such as wearable computing technology (WCT), microelectromechanical systems (MEMS) and biomedical micromachines. A central problem for the successful operation of such devices is equipping them with power sources that, on the one hand, supply the device with enough energy and, on the other hand, are as small and lightweight as possible. At such dimensions the shortcomings of existing, essentially two-dimensional (2D) lithium-ion batteries become apparent: with such small volumes and surface areas, sufficient energy densities cannot be achieved. This problem can be solved by adopting 3D microbatteries (MB). &lt;br /&gt;
The goal of optimising the lithium-ion battery architecture is to build a working 3D MB whose energy density and capacity are at least an order of magnitude higher than those of batteries in use today. To develop a working 3D MB, different microbattery architectures are developed and studied. Selecting and optimising a suitable one is greatly helped by theoretical studies carried out with computer simulations, which make it possible to test different 3D MB architectures, solve optimisation problems to find the optimal electrode geometry, optimise the electrode surface, study the behaviour of the whole battery during charging and discharging, and optimise suitable microbattery architectures. &lt;br /&gt;
At the macro level these studies use the finite element method (FEM); at the micro level, molecular dynamics (MD) simulations. The FEM simulations are carried out with the COMSOL Multiphysics and Elmer software packages, and the MD simulations with the dl_poly package.&lt;br /&gt;
&lt;br /&gt;
= '''Actuators, devices and their control''' =&lt;br /&gt;
== Controlling a device that studies the electromechanical properties of IPMCs ==&lt;br /&gt;
&lt;br /&gt;
The goal of this thesis is to build an experimental setup that measures the electromechanical properties of electroactive polymers. These materials are used as artificial muscles in various applications. The result of the work must be a module that allows the setup to be controlled over USB.&lt;br /&gt;
&lt;br /&gt;
== Building an autonomous device driven by IPMC actuators ==&lt;br /&gt;
&lt;br /&gt;
The goal is to build autonomous devices that move with the help of so-called artificial muscle materials, and to describe how they work. Some ideas: an &amp;quot;insect&amp;quot;, a wheel, a mini glider, a micro-humanoid, etc.&lt;br /&gt;
&lt;br /&gt;
== Controlling actuators made of carbon-polymer materials ==&lt;br /&gt;
&lt;br /&gt;
The goal of this thesis is to parameterise and study the electromechanical properties of the novel materials created in the lab by materials scientists. This means creating the necessary electromechanical and physico-chemical models, describing them, and validating them against experimental results. The work is suitable (at different scopes) for Bachelor's, Master's and doctoral theses. A foreign language is required, as well as the willingness and ability to work from time to time in different laboratories at universities abroad.&lt;br /&gt;
&lt;br /&gt;
== Studying energy harvesters made of IPMC/carbon-polymer materials ==&lt;br /&gt;
&lt;br /&gt;
The goal of this thesis is to parameterise and study the electromechanical properties of the novel materials created in the lab by materials scientists, with the aim of converting the energy of ambient vibrations into electrical energy. The work involves creating the necessary electromechanical and physico-chemical models, describing them, and validating them against experimental results. The work is suitable (at different scopes) for Bachelor's, Master's and doctoral theses. A foreign language is required, as well as the willingness and ability to work from time to time in different laboratories at universities abroad.&lt;br /&gt;
&lt;br /&gt;
== Cool ideas for applying artificial muscles ==&lt;br /&gt;
=== An expressive door knocker ===&lt;br /&gt;
&lt;br /&gt;
Build an expressive door knocker out of artificial muscles, see http://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=-Kee7iyp_3U&amp;amp;list=TLC5Famb33RxE&lt;br /&gt;
&lt;br /&gt;
=== Design and prototyping of a focusable lens system ===&lt;br /&gt;
&lt;br /&gt;
The goal is to build a simple prototype that can change the focal length of a soft lens by manipulating fluid pressure. The active element is an actuator made of carbon-polymer material, a so-called artificial muscle.&lt;br /&gt;
&lt;br /&gt;
=== Snail ===&lt;br /&gt;
&lt;br /&gt;
Build a simple prototype that moves the way a snail does.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= '''Signal and Image Processing''' =&lt;br /&gt;
&lt;br /&gt;
== Image Processing Apps for Android Devices ==&lt;br /&gt;
&lt;br /&gt;
Turn the camera of your Android device into a cool semi-professional camera by adding various image processing tools and options, such as '''filtering''', '''illumination enhancement''', '''histogram representation for better shots''', and many more. Alongside this application, we will develop a '''photo slide player''' that smartly chooses and plays music to fit each photo.&lt;br /&gt;
&lt;br /&gt;
You should have ''good knowledge of programming for Android devices'' before starting the project.&lt;br /&gt;
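As an illustration of the illumination enhancement mentioned above, here is a minimal histogram equalization sketch in plain NumPy. This is a hedged sketch of the general technique only, not the app's actual code; on Android the same operation would be written in Java/Kotlin or delegated to an image library.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image (H x W uint8 array).

    Spreads the intensity distribution over the full 0-255 range, a simple
    form of illumination enhancement for underexposed shots."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-level pixel counts
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                        # first occupied level
    # Map each input level through the normalized CDF to stretch contrast.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A dark test image: values clustered in 0..50 get stretched toward 0..255.
dark = (np.arange(64, dtype=np.uint8) % 51).reshape(8, 8)
bright = equalize_histogram(dark)
```

A colour image would typically be equalized on the luminance channel only, leaving chrominance untouched to avoid colour shifts.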
&lt;br /&gt;
&lt;br /&gt;
==TMS320C6713 DSP Board==&lt;br /&gt;
&lt;br /&gt;
The TMS320C6713 belongs to the floating-point DSP generation of the TMS320C6000™ DSP platform. The device is based on the high-performance, advanced very-long-instruction-word (VLIW) architecture developed by Texas Instruments (TI), making this DSP an excellent choice for multichannel and multifunction applications. The C6713 has a rich peripheral set that includes two Multichannel Audio Serial Ports (McASPs), two Multichannel Buffered Serial Ports (McBSPs), two Inter-Integrated Circuit (I2C) buses, one dedicated General-Purpose Input/Output (GPIO) module, two general-purpose timers, a host-port interface (HPI), and a glueless external memory interface (EMIF) capable of interfacing to SDRAM, SBSRAM, and asynchronous peripherals. &lt;br /&gt;
&lt;br /&gt;
You can ''design'' your own highpass, lowpass, bandpass, and bandstop ''filters'', ''denoise'' your input voice signal, add or cancel ''echo'', and introduce positive or negative ''feedback'' into your voice signal.&lt;br /&gt;
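Before committing filter code to the board, the coefficients can be prototyped on a PC. Below is a sketch of window-method FIR lowpass design in NumPy; the window method is a standard textbook technique, and the tap count, cutoff and sample rate here are illustrative assumptions, not values from any course material. The resulting coefficient array is what would be ported to the C6713.

```python
import numpy as np

def lowpass_fir(num_taps, cutoff, fs):
    """Design a lowpass FIR filter with the window method (Hamming window).

    cutoff and fs are in Hz; an odd num_taps gives a symmetric,
    linear-phase filter."""
    n = np.arange(num_taps) - (num_taps - 1) / 2   # centered tap indices
    fc = cutoff / fs                               # normalized cutoff (cycles/sample)
    h = 2 * fc * np.sinc(2 * fc * n)               # ideal (truncated) sinc response
    h *= np.hamming(num_taps)                      # taper to reduce sidelobe ripple
    return h / h.sum()                             # normalize to unity gain at DC

# Filter a signal containing a 50 Hz tone (kept) and a 3 kHz tone (removed).
fs = 8000.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 3000 * t)
y = np.convolve(x, lowpass_fir(101, 500, fs), mode="same")
```

On the DSP itself the same coefficients would feed a fixed-size multiply-accumulate loop (or the TI DSPLIB FIR routine) running sample by sample on the McASP audio stream.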
&lt;br /&gt;
&lt;br /&gt;
==Iris Recognition System Using Different Colour Channel Statistics==&lt;br /&gt;
&lt;br /&gt;
The proposed project introduces a novel system for person identification based on iris recognition. The system is fast, efficient and reliable, and uses data acquired from iris images for identification. It exploits not only the luminance but also the chrominance information of the iris images: the irises of different people contain distinctive patterns in both the luminance and the chrominance domains, and the system uses the pixel statistics of these patterns for high-performance iris recognition. The system comprises image acquisition devices, such as cameras and the necessary interface cards, as well as a computer that runs the algorithms developed for recognising persons from their iris patterns. The proposed system differs in methodology and performance from existing iris recognition systems, most of which use the algorithm introduced by John Daugman. Most methods in the literature follow Daugman's approach and consider only the luminance information of the iris images; we propose to also include the chrominance information obtained from colour channels such as hue and saturation. The proposed system can be used in security applications, for example to control entry into important buildings such as banks, treasuries and airports. It could also be used at border checkpoints to speed up processing and shorten queues.&lt;br /&gt;
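The colour-channel statistics idea can be sketched in a few lines. The following is a hypothetical illustration only, not the project's actual algorithm: each iris image is reduced to mean/standard-deviation statistics of a luminance channel and two simple chrominance proxies, and two images are compared by the distance between their feature vectors. The function names and the crude hue proxy are assumptions made for this sketch.

```python
import numpy as np

def iris_features(rgb):
    """Mean/std statistics over luminance, saturation and a hue proxy.

    rgb: H x W x 3 uint8 array. Returns a 6-element feature vector."""
    r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
    y = 0.299 * r + 0.587 * g + 0.114 * b                    # luminance
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    sat = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0.0)  # HSV saturation
    hue = np.arctan2(g - b, r - g)                           # crude hue proxy (radians)
    feats = []
    for channel in (y, sat, hue):
        feats += [channel.mean(), channel.std()]
    return np.array(feats)

def match(f1, f2):
    """Euclidean distance between feature vectors (smaller = more similar)."""
    return float(np.linalg.norm(f1 - f2))

# Two synthetic 'irises' with different dominant chrominance.
reddish = np.zeros((8, 8, 3), dtype=np.uint8)
reddish[..., 0] = 200
bluish = np.zeros((8, 8, 3), dtype=np.uint8)
bluish[..., 2] = 200
```

A real system would of course first segment the iris region and use far richer statistics (e.g. local histograms per sector), but the matching principle, nearest feature vector wins, stays the same.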
&lt;br /&gt;
= '''Robotics''' =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Applications of the Aldebaran Nao ==&lt;br /&gt;
&lt;br /&gt;
The Aldebaran Nao is a half-metre-tall humanoid robot equipped with numerous sensors and motors. The task is to apply Nao to various useful and interesting jobs: communication, playing football, and so on. It runs on an ARM/Intel Atom based mini computer.&lt;br /&gt;
&lt;br /&gt;
=== Robocup 2014 SPL ===&lt;br /&gt;
Team Philosopher is planning to participate in the upcoming RoboCup in the Standard Platform League. RoboCup is an international robotics competition founded in 1997. Its aim is to promote robotics and AI research by offering a publicly appealing but formidable challenge [http://en.wikipedia.org/wiki/RoboCup]. &lt;br /&gt;
The SPL consists of several competitions, including 5 vs. 5 soccer, drop-in soccer and technical challenges. The league's standard platform this year is the Nao robot.&lt;br /&gt;
Members: Gholamreza Anbarjafari, Kristian Hunt, Roland Pihlakas, Viljar Puusepp, and Siim Schults.&lt;br /&gt;
Contact: siimsch@ut.ee&lt;br /&gt;
&lt;br /&gt;
== Glasses for Blind People: An Automatic Way of Interacting with the Environment ==&lt;br /&gt;
The project consists of several stages, and a researcher can focus on one of the following modules.&lt;br /&gt;
'''Image Processing Modules:'''&lt;br /&gt;
In this module the image captured by the camera is analysed. In particular, faces are detected and extracted and, if a face is one of those in the database, it is recognised. In parallel, the person's facial expression is also extracted. This helps the blind user find out whether the person in front of them is happy or sad, which improves the quality of communication. Various image pre-processing steps, such as illumination enhancement, are also required at this stage and can be proposed and employed.&lt;br /&gt;
'''GPS Modules:'''&lt;br /&gt;
Using GPS, and possibly wireless data communication with a server, blind citizens can determine their current location and, with mapping services such as Google Maps, navigate to their destination.&lt;br /&gt;
'''Sensor Modules:'''&lt;br /&gt;
To speed up certain reactions, such as avoiding obstacles, ultrasonic or optical sensors are used in this project. The sensors work independently of the visual system, as their only task is to help the blind person walk. The project opens various new research directions: for example, facial expression recognition can be added to the computer vision module, so that a voice prompt can tell the blind person the current expression of the person in front of them. The benefits the project brings to society, such as easier communication for blind people, are also significant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Educational robots ==&lt;br /&gt;
&lt;br /&gt;
The goal is to develop entertaining robotics-themed exhibits for children, both for the AHHAA science centre and for the so-called robot theatre. The work includes designing a device and realising it as a working prototype. There are several concrete ideas, but new ones are welcome.&lt;br /&gt;
&lt;br /&gt;
= '''Topics related to partners''' =&lt;br /&gt;
&lt;br /&gt;
== Studying mechanoelectrical sensors ==&lt;br /&gt;
&lt;br /&gt;
The goal of this thesis is to experimentally study the properties of various materials that convert mechanical motion into electricity. The work includes setting up the experiment and carrying out computer-controlled measurements in the LabVIEW environment. Well suited to students of computer engineering, physics and materials science.&lt;br /&gt;
&lt;br /&gt;
== A robotic mannequin for the clothing industry ==&lt;br /&gt;
&lt;br /&gt;
The goal of the project is to develop a human-shaped mannequin lower body for the clothing industry, to speed up and improve the development of designer clothing.&lt;br /&gt;
The project requires designing the mechanical assemblies, implementing the electronic control units, modelling and creating the algorithms that control the mannequin's outer surface, and developing the system control software. User-facing software also needs to be developed. The project involves collaboration with several interesting people active in the fashion and clothing industry.&lt;br /&gt;
Suitable for several Bachelor's and Master's theses; the scope of the work depends on the level.&lt;br /&gt;
&lt;br /&gt;
The project also has a concrete application, see [http://www.fits.me www.fits.me]&lt;br /&gt;
&lt;br /&gt;
== A touch-sensitive sensor for the robotic mannequin ==&lt;br /&gt;
&lt;br /&gt;
The goal of the project is to develop a touch-sensitive &amp;quot;skin&amp;quot; for the human-shaped mannequin.&lt;br /&gt;
The project requires studying and testing various sensors and determining the relevant measurement ranges and accuracies according to the needs of the application. The project involves collaboration with several interesting people active in the fashion and clothing industry.&lt;br /&gt;
Suitable for several Bachelor's and Master's theses; the scope of the work depends on the level.&lt;br /&gt;
&lt;br /&gt;
= '''Teaching-related topics''' =&lt;br /&gt;
== Studying the operation of a sensor/actuator and compiling a tutorial ==&lt;br /&gt;
&lt;br /&gt;
The goal of this thesis is to experimentally parameterise a sensor/actuator used in robotics/automation and, based on the results, to compile a protocol and methodology for its use.&lt;br /&gt;
&lt;br /&gt;
== Compiling tutorial materials for school robotics ==&lt;br /&gt;
&lt;br /&gt;
The goal of this thesis is to compile tutorial materials and engaging assignments for teachers, as well as assignments that help children learn physics, mathematics, chemistry and biology.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=General information for bachelor's and master's thesis students=&lt;br /&gt;
&lt;br /&gt;
You have two supervisors. We expect you to contact at least one of them at least once a week and to talk through your progress and problems.&lt;br /&gt;
&lt;br /&gt;
The first version of the thesis must be submitted by 1 May at the latest. The only acceptable reasons for being late are medical ones that can be documented. The first version must contain:&lt;br /&gt;
# an introduction explaining why the result of the project is needed and what others around the world have done in the field;&lt;br /&gt;
# the theoretical/mathematical/modelling foundations of the project, written out in full;&lt;br /&gt;
# a detailed description of the work done (there is never too much detail; delete is the easiest function a supervisor can apply :) when rewriting your text);&lt;br /&gt;
# the results of the work, i.e. either measurement results or the technical description of a working(!) prototype together with the device itself;&lt;br /&gt;
# an assessment of your work, i.e. an analysis of how the results could be developed further, an analysis of the results, and a judgement of the quality of the outcome.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
[[vanad teemad]]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Student_projects&amp;diff=11329</id>
		<title>Student projects</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Student_projects&amp;diff=11329"/>
		<updated>2013-11-25T20:05:27Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:IMS poster.png|300px|right]]&lt;br /&gt;
&lt;br /&gt;
''Here are some of the projects that can be carried out in our research group. This is not an exhaustive list, and capable students are always welcome to bring interesting ideas of their own. Every topic can be pursued all the way to a PhD defence.''&lt;br /&gt;
''Students who complete their thesis with a grade of A will also receive appropriate remuneration.'' ''If you are interested, [[User:Alvo#Contacts|get in touch]]''. Some topic descriptions are in English; those topics are supervised by visiting researchers and lecturers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= '''Experimental materials science''' =&lt;br /&gt;
== Polymer actuators with carbon electrodes ==&lt;br /&gt;
&lt;br /&gt;
There are many different kinds of artificial muscles, i.e. electroactive polymer composite materials. We work on ionic materials that operate at low voltage and therefore have several advantages for use in microdevices and in medicine. Two research directions are currently being pursued. The first aims to develop the ionic liquid-carbon-polymer composites used in artificial muscles, employing various carbons: carbon aerogel, carbide-derived carbon, carbon nanotubes and others. The second focuses on applying new fabrication technologies for artificial muscles. The goal of the thesis is to prepare various actuator-sensor materials and to study their fabrication routes and their properties.&lt;br /&gt;
&lt;br /&gt;
== Artificial muscles in space applications ==&lt;br /&gt;
&lt;br /&gt;
The materials we produce are lightweight and can be driven with low voltages, which makes them interesting to manufacturers of space technology hardware.&lt;br /&gt;
The goal of this thesis is to study the effects of radiation, temperature and the other damaging factors that act on materials in space.&lt;br /&gt;
&lt;br /&gt;
== Fabrication and characterisation of multilayer artificial muscles based on conducting polymers ==&lt;br /&gt;
&lt;br /&gt;
Artificial muscles, sensors and energy-harvesting devices are among the newest and most exciting application areas of electrically conducting organic polymers. They are expected to find use in medicine, robotics, and the space and defence industries. A substantial amount of development work is still needed, however, before they can be adopted on a large scale. A multilayer design makes it possible to control the conducting polymer material better and to improve its properties. Novel synthesis methods for producing metal-free artificial muscles have been developed in the IMS lab at the University of Tartu. The simple single-layer materials made so far have several shortcomings (loss of conductivity, sensitivity to the external environment). Adding layers with ion mobility opposite to that of the actuating polymer layer should make it possible to avoid these shortcomings.&lt;br /&gt;
&lt;br /&gt;
== Design of actuator performance ==&lt;br /&gt;
(EAP development, technology), Master's or Bachelor's student&lt;br /&gt;
Actuator research here focuses mainly on actuator preparation with new devices in mind. Carbide-derived carbon is applied both as the actuator material and as a conductive substrate onto which a conducting polymer is deposited. The interaction between these materials is the object of the actuator studies, with the main goal of optimising actuator strain and stress. Improving actuator devices through additional chemical modification (hydrophobic or hydrophilic coatings) is also part of the new actuator design work.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Flexible autofocus fluid lens development for invisible shirt technology ==&lt;br /&gt;
IT, technology and materials science; PhD student or 1 Master's student &lt;br /&gt;
An autofocus fluid lens is based on the interface formed between oil and water, which acts as a lens and changes its shape (concave or convex) under an applied electric field. A new design based on conducting polymer actuators and their modifications changes the oil-electrolyte interface through membrane actuation, which requires less energy and therefore suits portable devices (cell phones, laptops). A device for testing the membrane actuator has already been constructed. To apply different EAP actuators to it and to miniaturise the device, work is needed on installing the electronic control and on testing actuator membranes. A flexible fluid lens is the next step towards integrating the lens into smart-shirt technology to obtain 'invisible' shirts (an optical effect).&lt;br /&gt;
&lt;br /&gt;
== Scanning ion conductance microscopy (SICM) ==&lt;br /&gt;
1 PhD or 2 Master's/Bachelor's students; IT, physics, materials science, technology&lt;br /&gt;
SICM is a new instrument for measuring ion movement at the surface of a conductive material. Double micropipettes filled with electrolyte connect to the conductive sample, and currents at the nano- and picoampere level can be recorded. We are looking for students to make the SICM instrument usable for actuator measurements, in order to learn more about the charging/discharging mechanism. By implementing an electrochemical method on the SICM we also want to establish micro-polymerisation of conducting polymers for new smart devices. The main task for IT students is to help us bring the SICM instrument into operating condition. With a student of materials science, chemistry or physics we want to investigate in situ actuation modes of actuator samples, to learn more about how ions are transported inside electroactive polymers.&lt;br /&gt;
&lt;br /&gt;
== Nanobubble formation mini device construction ==&lt;br /&gt;
Technology, surface science; PhD, Master's or Bachelor's student&lt;br /&gt;
Nanobubbles (oxygen or ozone) in aqueous solution can be produced by bursting microbubbles, and the goal of this project is to build a device that generates them inside a water stream. The intended application is the cleaning of solar cells, a problem the market has not yet solved: because of dust and dirt, the energy harvested by uncleaned solar panels drops by 20-40 percent within 5 years. The wish for a simple, environmentally harmless method is one reason for using nothing but water and air for nanobubble formation and cleaning. The project aims at future collaboration with solar companies for cleaning and fumigation purposes.&lt;br /&gt;
&lt;br /&gt;
==Preparing the industrial production of actuator materials with carbon electrodes==&lt;br /&gt;
&lt;br /&gt;
The aim of the project is to develop a material and a methodology for manufacturing actuators with carbon electrodes using industrial processes. The broader goal of the work is mass production of such materials.&lt;br /&gt;
&lt;br /&gt;
= '''Computational experiments and materials simulation''' =&lt;br /&gt;
&lt;br /&gt;
==Simulating material defects in high-frequency electric fields==&lt;br /&gt;
[[Image:Reklaamposter.png|right|thumb|400px]]&lt;br /&gt;
Are you a bright student interested in today's cutting-edge science? Do your thesis on a CERN-related topic while working at CERN! Get in touch and take part in developing the new CERN-based accelerator! '''(PhD opportunity!)'''&lt;br /&gt;
&lt;br /&gt;
The Compact Linear Collider (CLIC) is a next-generation linear accelerator being developed at CERN, in which particles are accelerated along straight trajectories. The planned machine will be 50 km long and will reach energies of 0.5 TeV - 5 TeV. To achieve such energies, an accelerating electric field of 100-150 MV/m is used. In such high electric fields, however, frequent electrical breakdowns on the accelerator electrodes become a serious problem. &lt;br /&gt;
&lt;br /&gt;
The breakdowns manifest as vacuum arcs (arc discharges in vacuum), and it is generally assumed that a vacuum arc starts from nanoscale needle-like surface defects that amplify the electric field; the formation mechanism of these defects is unclear. '''Reducing electrical breakdowns below the critical limit is a problem of central importance for building CLIC!'''&lt;br /&gt;
 &lt;br /&gt;
One of the most promising ways to improve the accelerator structure is to find new materials that can withstand high electric fields and rapid changes in those fields. The key problem in finding new materials is understanding the physical processes that take place in the material before and during a breakdown. The research uses various computational methods, such as '''molecular dynamics, the finite element method and kinetic Monte Carlo''', to explain the origins of the surface defects that lead to electrical breakdowns. Carrying out the required computer simulations means that, broadly speaking, so-called '''multiscale simulations''' are used, covering materials modelling from the atomistic scale up to the macroscale.&lt;br /&gt;
&lt;br /&gt;
Application areas of the linear collider include physics beyond the Standard Model, precision measurements of the Higgs boson, and medical fields such as cancer therapy.&lt;br /&gt;
&lt;br /&gt;
==Studying artificial muscle materials with various computer simulation methods==&lt;br /&gt;
&lt;br /&gt;
* This is a material that an external electric field can make change shape: bend, swell, or contract, just like a real muscle&lt;br /&gt;
* an artificial muscle material can also respond to an external mechanical deformation with an electrical signal&lt;br /&gt;
* an artificial muscle operates silently and is itself very small&lt;br /&gt;
* the artificial muscle materials under study include such &amp;quot;hits&amp;quot; as graphene and ionic liquids&lt;br /&gt;
* computer simulations take you &amp;quot;inside&amp;quot; the material, letting you see what stays hidden in experiments, giving information about the processes taking place and hints for improving the materials&lt;br /&gt;
* want to know how a 2 cm strip of artificial muscle moves? take the finite element method and you will see the stresses and deformations during the shape change&lt;br /&gt;
* want to know how changing the shape of the electrodes affects the capacity of a lithium-ion battery - that is, how long your electric car could race along the Tartu-Tallinn highway? take the finite element method and you can calculate the discharge rate of the battery powering your electric car&lt;br /&gt;
* want to know how atoms and molecules move and interact in an artificial muscle and in the electrodes and electrolyte of a lithium-ion battery? take molecular dynamics simulation and you can enter a world 10000 times smaller than the diameter of your hair&lt;br /&gt;
* want to virtually sit on every atom and watch how one atom's electron cloud scrambles another's? take quantum chemical molecular dynamics and your ride on the crests of the wave functions will be wilder than at Ristna cape during storm Katja.&lt;br /&gt;
&lt;br /&gt;
==Optimising the architecture of lithium-ion batteries with computer simulations==&lt;br /&gt;
&lt;br /&gt;
Portable micro-scale power supplies are an important factor in many emerging technologies, because the shrinking dimensions of microelectronics have left the development of small power sources far behind. The low energy capacity of suitable portable power sources is becoming an obstacle to several technology fields, such as wearable computing technology (WCT), microelectromechanical systems (MEMS), and biomedical micromachines. A key requirement for the successful operation of such devices is equipping them with power sources that, on the one hand, supply the device with sufficient energy and, on the other, are as small and lightweight as possible. At such scales the shortcomings of existing, essentially two-dimensional (2D) lithium-ion batteries become apparent - with such small volumes and areas, sufficient energy densities cannot be achieved. This problem can be solved by adopting 3D microbatteries (MB). &lt;br /&gt;
The goal of optimising lithium-ion battery architecture is to build a working 3D-MB whose energy density and capacity are at least an order of magnitude higher than those of batteries in use today. To develop a working 3D-MB, various microbattery architectures are developed and studied. Selecting and optimising a suitable one is greatly simplified by theoretical studies carried out with computer simulations, which make it possible to test different 3D-MB architectures, solve optimisation problems to find the optimal electrode geometry, optimise the electrode surface, study the behaviour of the whole battery during charging and discharging, and optimise suitable microbattery architectures. &lt;br /&gt;
At the macro level such studies use the finite element method (FEM), and at the micro level molecular dynamics (MD) simulation. FEM simulations are carried out with the software packages COMSOL Multiphysics and Elmer, and MD simulations with the package dl_poly.&lt;br /&gt;
&lt;br /&gt;
= '''Actuators, devices and their control''' =&lt;br /&gt;
==Controlling a device for studying the electromechanical properties of IPMC==&lt;br /&gt;
&lt;br /&gt;
The aim of this work is to build an experimental device that measures the electromechanical properties of electroactive polymers. These materials are used as artificial muscles in various applications. The result must be a module that allows the device to be controlled over USB.&lt;br /&gt;
&lt;br /&gt;
==Constructing an autonomous device driven by IPMC actuators==&lt;br /&gt;
&lt;br /&gt;
The goal is to construct autonomous devices that move with the help of so-called artificial muscle materials, and to describe how they work. A selection of ideas: an &amp;quot;insect&amp;quot;, a wheel, a mini-glider, a micro-humanoid, etc.&lt;br /&gt;
&lt;br /&gt;
==Controlling actuators made of carbon-polymer materials==&lt;br /&gt;
&lt;br /&gt;
The aim of this work is to parametrise and study the electromechanical properties of novel materials created in the lab by materials scientists, i.e. to build the required electromechanical and physico-chemical models, describe them, and validate them against experimental results. The work is suitable (at different scopes) for bachelor's, master's and doctoral theses. Foreign language skills are required, as well as the willingness and ability to work from time to time in labs at universities abroad.&lt;br /&gt;
&lt;br /&gt;
==Studying energy harvesters made of IPMC/carbon-polymer materials==&lt;br /&gt;
&lt;br /&gt;
The aim of this work is to parametrise and study the electromechanical properties of novel materials created in the lab by materials scientists, with the goal of converting the energy of vibrations in the surrounding environment into electrical energy. The work involves building the required electromechanical and physico-chemical models, describing them, and validating them against experimental results. The work is suitable (at different scopes) for bachelor's, master's and doctoral theses. Foreign language skills are required, as well as the willingness and ability to work from time to time in labs at universities abroad.&lt;br /&gt;
&lt;br /&gt;
== Cool ideas for applying artificial muscles==&lt;br /&gt;
=== An expressive door knocker ===&lt;br /&gt;
&lt;br /&gt;
Build an expressive door knocker out of artificial muscles, see http://www.youtube.com/watch?feature=player_detailpage&amp;amp;v=-Kee7iyp_3U&amp;amp;list=TLC5Famb33RxE&lt;br /&gt;
&lt;br /&gt;
===Designing and prototyping a focusable lens system===&lt;br /&gt;
&lt;br /&gt;
The goal is to build a simple prototype that can change the focal length of a soft lens by manipulating fluid pressure. The active element is an actuator made of carbon-polymer material, a so-called artificial muscle.&lt;br /&gt;
&lt;br /&gt;
=== Snail ===&lt;br /&gt;
&lt;br /&gt;
Build a simple prototype that moves like a snail.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= '''Signal and Image Processing''' =&lt;br /&gt;
&lt;br /&gt;
==Image Processing Apps for Android Devices==&lt;br /&gt;
&lt;br /&gt;
Convert the camera of your Android device into a cool semi-professional camera by introducing various image processing tools and options, such as '''filtering''', '''illumination enhancement''', '''histogram representation for better shots''', and many more. Alongside this application, we will develop a '''photo slide player''' that smartly chooses and plays music to fit each photo.&lt;br /&gt;
&lt;br /&gt;
You should have ''good knowledge of programming Android devices'' before you start the project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==TMS320C6713 DSP Board==&lt;br /&gt;
&lt;br /&gt;
The TMS320C6713 device belongs to the floating-point DSP generation of the TMS320C6000™ DSP platform. The C6713 device is based on the high-performance, advanced very-long-instruction-word (VLIW) architecture developed by Texas Instruments (TI), making this DSP an excellent choice for multichannel and multifunction applications. The C6713 has a rich peripheral set that includes two Multichannel Audio Serial Ports (McASPs), two Multichannel Buffered Serial Ports (McBSPs), two Inter-Integrated Circuit (I2C) buses, one dedicated General-Purpose Input/Output (GPIO) module, two general-purpose timers, a host-port interface (HPI), and a glueless external memory interface (EMIF) capable of interfacing to SDRAM, SBSRAM, and asynchronous peripherals. &lt;br /&gt;
&lt;br /&gt;
You can ''design'' your own highpass, lowpass, bandpass, and bandstop ''filter'', ''denoise'' your input voice signal, add or cancel ''echo'', as well as introduce positive and negative ''feedback'' to your voice signal.&lt;br /&gt;
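As an offline sketch of the kind of filter design mentioned above (done here in Python with SciPy; the resulting coefficients could then feed a filter routine on the C6713, whose toolchain is not shown, and the sampling rate and band edges are illustrative assumptions):

```python
# Sketch: design a voice-band bandpass filter and apply it to a noisy signal.
import numpy as np
from scipy.signal import butter, lfilter

fs = 8000.0                  # assumed sampling rate, Hz
low, high = 300.0, 3400.0    # assumed voice band, Hz

# Butterworth bandpass; corner frequencies normalised to the Nyquist rate
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")

t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)        # in-band 1 kHz tone
hum = 0.5 * np.sin(2 * np.pi * 50 * t)     # out-of-band 50 Hz hum
filtered = lfilter(b, a, tone + hum)       # hum is strongly attenuated
```

The same approach works for lowpass, highpass and bandstop designs by changing `btype` and the corner frequencies.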
&lt;br /&gt;
&lt;br /&gt;
==Iris Recognition System Using Different Colour Channel Statistics==&lt;br /&gt;
&lt;br /&gt;
The proposed project introduces a novel system for person identification using iris recognition. The system is fast, efficient and reliable, and uses data acquired from iris images for identification. It uses not only the luminance but also the chrominance information of the iris images. The irises of different people contain distinctive patterns in both the luminance and chrominance domains, and the proposed system exploits the pixel statistics of these distinctive patterns for high-performance iris recognition. The system comprises image acquisition devices such as cameras and the necessary interface cards, as well as a computer system running the algorithms developed for recognising persons from their iris patterns. The proposed system differs in methodology and performance from existing iris recognition systems, which mainly use the algorithm introduced by John Daugman. Most methods presented in the literature follow Daugman's approach and consider only the luminance information of the iris images; we propose to also include the chrominance information acquired from different colour channels such as Hue and Saturation. The proposed system can be used in security systems, including secure entrance into important buildings such as banks, treasuries and airports. It could also be used at border checkpoints to shorten unnecessary queues.&lt;br /&gt;
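A minimal sketch of the colour-channel statistics idea: per-channel means and standard deviations in HSV as a feature vector. The pixel lists, the feature choice and the distance threshold below are all illustrative assumptions, not the proposed system itself, which would first locate and unwrap the iris region.

```python
# Sketch: HSV per-channel statistics as a (very naive) iris feature vector.
import colorsys
import numpy as np

def hsv_features(pixels):
    """Mean and std of H, S, V over an iterable of (r, g, b) tuples in [0, 1]."""
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels])
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

def same_iris(f1, f2, threshold=0.1):
    """Naive matcher: Euclidean distance between feature vectors (made-up threshold)."""
    return bool(np.linalg.norm(f1 - f2) < threshold)

# Hypothetical stand-in 'iris' patches: brownish vs. bluish pixels
brown = [(0.5, 0.3, 0.2)] * 20
blue = [(0.1, 0.1, 0.9)] * 20
```

A real matcher would compare richer statistics (e.g. channel histograms) over many iris samples; this only shows how chrominance channels enter the feature vector.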
&lt;br /&gt;
= '''Robotics''' =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Applying the Aldebaran Nao==&lt;br /&gt;
&lt;br /&gt;
The Aldebaran Nao is a half-metre-tall humanoid robot equipped with a number of sensors and motors. The task is to apply the Nao to various useful and interesting tasks: communication, playing football, etc. It runs on an ARM/Intel Atom based mini computer.&lt;br /&gt;
&lt;br /&gt;
=== Robocup 2014 SPL ===&lt;br /&gt;
Team Philosopher is planning to participate in the upcoming RoboCup in the Standard Platform League. RoboCup is an international robotics competition founded in 1997. The aim is to promote robotics and AI research by offering a publicly appealing but formidable challenge [http://en.wikipedia.org/wiki/RoboCup]. &lt;br /&gt;
The SPL consists of several competitions, including 5 vs. 5 soccer, drop-in soccer and technical challenges.  &lt;br /&gt;
Members: Gholamreza Anbarjafari, Kristian Hunt, Roland Pihlakas, Viljar Puusepp, and Siim Schults.&lt;br /&gt;
Contact: siimsch@ut.ee&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Glasses for Blind People: An Automatic Way of Interacting with the Environment ==&lt;br /&gt;
The project consists of several stages, and a researcher can focus on one of the following modules.&lt;br /&gt;
'''Image Processing Modules:'''&lt;br /&gt;
In this module the image captured by the camera is analysed. In particular, faces are detected and extracted; if a face is one of those in the database, it is recognised. In parallel, the person's facial expression is extracted as well. This helps blind people find out whether the person in front of them is happy or sad, which improves the quality of communication. Various image pre-processing steps, such as image illumination enhancement, are also required at this stage and can be proposed and employed.&lt;br /&gt;
'''GPS Modules:'''&lt;br /&gt;
Using GPS, and possibly wireless data communication with a server, blind citizens can determine their current location, and by using mapping systems such as Google Maps they can navigate to their destination.&lt;br /&gt;
'''Sensor Modules:'''&lt;br /&gt;
To speed up some reactions, such as avoiding obstacles, ultrasonic or optical sensors are used in this project. The sensors work independently of the visual system, as their only task is to help the blind person walk. The project opens various new research directions; for example, new algorithms such as facial expression recognition can be added to the computer vision module, so that a voice can inform the blind person of the current expression of the person in front of them. The benefits this project brings to society, such as easier communication for blind people, are also significant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Educational robots==&lt;br /&gt;
&lt;br /&gt;
The goal is to develop entertaining robotics-themed tools for children, both for the AHHAA science centre and for the so-called robot theatre. The work includes designing a device and realising it as a working prototype. There are several concrete ideas, but new ideas are welcome.&lt;br /&gt;
&lt;br /&gt;
= '''Topics related to partners''' =&lt;br /&gt;
&lt;br /&gt;
==Studying mechanoelectrical sensors==&lt;br /&gt;
&lt;br /&gt;
The aim of this work is to experimentally study the properties of various materials that convert mechanical motion into electricity. The work includes building the experiment and carrying out computer-controlled measurements in the LabVIEW environment. Well suited for computer engineering, physics and materials science students.&lt;br /&gt;
&lt;br /&gt;
==Robotic mannequin for the clothing industry==&lt;br /&gt;
&lt;br /&gt;
The aim of the project is to develop a human-shaped mannequin lower body for the clothing industry, to speed up and improve the development of designer clothing.&lt;br /&gt;
The project requires constructing the mechanical assemblies, realising the electronic control units, modelling and creating the algorithms for controlling the mannequin's outer surface, and developing the system control software. User software must also be developed. The project involves collaborating with several interesting people who are active in the fashion and clothing industry.&lt;br /&gt;
Suitable for several bachelor's and master's theses; the scope of the work varies with the level.&lt;br /&gt;
&lt;br /&gt;
The project also has a concrete application, see [http://www.fits.me www.fits.me]&lt;br /&gt;
&lt;br /&gt;
==Touch-sensitive sensor for the robotic mannequin==&lt;br /&gt;
&lt;br /&gt;
The aim of the project is to develop a touch-sensitive &amp;quot;skin&amp;quot; for the human-shaped mannequin.&lt;br /&gt;
The project requires studying and testing different sensors, and finding the relevant measurement ranges and accuracies according to the needs of the application. The project involves collaborating with several interesting people who are active in the fashion and clothing industry.&lt;br /&gt;
Suitable for several bachelor's and master's theses; the scope of the work varies with the level.&lt;br /&gt;
&lt;br /&gt;
= '''Related to teaching''' =&lt;br /&gt;
==Studying the operation of a sensor/actuator and preparing instructional material==&lt;br /&gt;
&lt;br /&gt;
The aim of this work is to experimentally parametrise a sensor/actuator used in robotics/automation and, based on the results, to compile a protocol and methodology for its use.&lt;br /&gt;
&lt;br /&gt;
== Preparing instructional materials for school robotics==&lt;br /&gt;
&lt;br /&gt;
The aim of this work is to compile instructional materials and engaging assignments for teachers, as well as exercises that help children learn physics, mathematics, chemistry and biology.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=General information for bachelor's and master's thesis students=&lt;br /&gt;
&lt;br /&gt;
You have two supervisors. We expect you to contact at least one of them at least once a week and discuss your concerns and progress.&lt;br /&gt;
&lt;br /&gt;
The first version of the thesis must be submitted by May 1 at the latest. Only documented medical reasons are acceptable grounds for lateness. The first version must contain:&lt;br /&gt;
# an introduction explaining why the result of the project is needed and what others in this field around the world have done;&lt;br /&gt;
# the theoretical/mathematical/modelling foundations of the project, written out in full;&lt;br /&gt;
# a detailed description of the work done (there can never be too many details; delete is the easiest function a supervisor can apply :) when rewriting your text);&lt;br /&gt;
# the results of the work, i.e. either measurement results or the technical description of a working! prototype of the device, together with the device itself;&lt;br /&gt;
# an assessment of your work, i.e. an analysis of how the results could be developed further, an analysis of the results, and an assessment of the quality of the outcome.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
[[vanad teemad]]&lt;br /&gt;
--&amp;gt;&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10886</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10886"/>
		<updated>2013-08-15T17:52:02Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Nao_line_detection|Line detection]]&lt;br /&gt;
*[[Getting-real-world-coordinates-from-image-frame|Getting real world coordinates from the image]]&lt;br /&gt;
*[[Nao_localization|Localization]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for the Nao is full of challenges. Here are some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[[Setup_Nao | How to start with Nao?]]&lt;br /&gt;
&lt;br /&gt;
*[http://www.aldebaran-robotics.com/documentation/nao/upgrade.html Upgrade Nao - link to Aldebaran guide.]&lt;br /&gt;
&lt;br /&gt;
*[[Start_learning_robotics | How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Module | How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Keyboard_Control_Python | How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
*[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
*[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
As you are reading this, you are heading in the right direction :) We also found it useful to read other users' tutorials, especially the wiki from the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but learned from the default reference of SDK 1.14. We noticed this but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, etc., and overall spent a lot of time on this. So update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating Naos, don't forget to update the SDK, cross-toolchain, qibuild, etc. If you do forget, you may encounter errors that are really hard to interpret. One such error occurred when a module was built with an old version of the Atom cross-toolchain. When Naoqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===UTNAOtool misbehaviors===&lt;br /&gt;
Issues:&lt;br /&gt;
* The Nao load-up doesn't load everything correctly - we have to restart Naoqi for everything to be loaded correctly.&lt;br /&gt;
* The tool crashes during many operations, e.g. Generate color table.&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10807</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10807"/>
		<updated>2013-05-31T18:37:00Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How to map 2D points to 3D? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need it? ==&lt;br /&gt;
Our goal was to convert coordinates on the image into real-world coordinates, i.e. we wanted to know an object's position relative to the robot's position. This is necessary for the robot to understand where objects are relative to itself. More generally, this is also needed to figure out absolute coordinates on the football field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
We used the pinhole camera model.&lt;br /&gt;
I am not going to describe all the theory here; below is the main formula that describes how objects are projected onto the screen.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Good resources are available for learning about this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus more on the overall idea, the problems we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To convert between the two coordinate systems, we need to know the intrinsic and extrinsic parameters of the camera. The former describe how any real-world object is projected onto the camera's light sensors; they consist of parameters such as the camera's focal length, principal point and skew of the image axes. The latter give information about the camera's pose in the observed environment (3 rotations and 3 translations, as we live in a 3-dimensional world). To convert a 2-dimensional screen point into a 3-dimensional world point, we also need the extra assumption that all the objects of interest lie on a specific plane (e.g. on the floor, where Z=0).&lt;br /&gt;
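The 3D-to-2D projection just described can be sketched numerically. The intrinsic matrix and the identity pose below are made-up illustration values, not a calibrated camera:

```python
# Sketch of the pinhole projection s*[u, v, 1]^T = K [R|t] [X, Y, Z, 1]^T.
import numpy as np

# Hypothetical intrinsics: focal lengths fx = fy = 800 px, principal
# point (cx, cy) = (320, 240), zero skew.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Extrinsics [R|t]: camera at the world origin, looking along +Z.
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])

def project(point_3d):
    """Project a world point (X, Y, Z) to pixel coordinates (u, v)."""
    p = K @ Rt @ np.append(point_3d, 1.0)  # homogeneous image point s*(u, v, 1)
    return p[:2] / p[2]                    # divide by s to get pixels
```

A point on the optical axis, e.g. (0, 0, 1), lands exactly on the principal point.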
&lt;br /&gt;
== How did we find out the camera parameters and the pose? ==&lt;br /&gt;
We based our calibration system on the OpenCV implementation of an algorithm that tries to match screen coordinates with known real-world coordinates and thus estimates the required parameters from many different observations. Because we only need to find these parameters once, we saw no need to develop anything more complex. We found useful the OpenCV functions specially designed for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters) (see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); this [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful). If you are interested in the inner workings of these algorithms, you can read the documentation or the source code.&lt;br /&gt;
Having obtained these parameters, we were able to project real-world 3D points onto the image plane. But as this wasn't our goal (we wanted 2D -&amp;gt; 3D), we had to reverse the equation.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D points to 3D? ==&lt;br /&gt;
There was a bit of chaos and many “wasted” days spent reversing this equation. We had problems with inverting matrices: we found that OpenCV's Mat::inv() didn't give good enough results for most of our matrices - maybe because OpenCV's pseudo-inverse was buggy, or maybe we were using it incorrectly. We tried the same numbers with Octave's pinv and got quite reasonable results, so we recommend being cautious with pseudo-inverting matrices in OpenCV.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&amp;lt;br&amp;gt;&lt;br /&gt;
TODO: A picture of previous formula altered.&lt;br /&gt;
&lt;br /&gt;
In the end, it turned out that it was really easy to solve the equation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we assume that Z=0 and (a)&amp;lt;sub&amp;gt;3,4&amp;lt;/sub&amp;gt; is the camera matrix multiplied by the rotation-translation matrix then we actually get the following system of equations.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inv_eq.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We don't know the values of s, X and Y. As we have three unknowns and three equations, we can simply solve the system and obtain the world coordinates X and Y for the pixel (u, v). We implemented this using [http://en.wikipedia.org/wiki/Cramers_rule Cramer's rule].&lt;br /&gt;
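The solve step can be sketched as follows. Here the 3x4 matrix A (the camera matrix multiplied by the rotation-translation matrix) uses illustrative numbers, not a real calibration; the unknowns (X, Y, s) are recovered with Cramer's rule as described above:

```python
# Sketch: recover (X, Y) on the Z=0 plane from a pixel (u, v) by solving
# s*[u, v, 1]^T = A @ [X, Y, 0, 1]^T, i.e. A1*X + A2*Y - s*[u, v, 1]^T = -A4.
import numpy as np

A = np.array([[800.0, 0.0, 320.0, 10.0],
              [0.0, 800.0, 240.0, 20.0],
              [0.0, 0.0, 1.0, 2.0]])  # hypothetical K [R|t]

def backproject(u, v):
    """Solve the 3x3 system for the unknowns (X, Y, s) via Cramer's rule."""
    M = np.column_stack([A[:, 0], A[:, 1], -np.array([u, v, 1.0])])
    rhs = -A[:, 3]
    det = np.linalg.det(M)
    X = np.linalg.det(np.column_stack([rhs, M[:, 1], M[:, 2]])) / det
    Y = np.linalg.det(np.column_stack([M[:, 0], rhs, M[:, 2]])) / det
    return X, Y
```

Round-tripping a known point checks the algebra: projecting the world point (1, 2, 0) with this A gives the homogeneous image point (810, 1620, 2), i.e. pixel (405, 810), and back-projecting that pixel returns (1, 2). `np.linalg.solve` on the same system would work equally well.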
&lt;br /&gt;
== Changing plane rotation while robot is walking ==&lt;br /&gt;
We noticed quite heavy tilting in the image while the Nao was moving; in numbers it was approximately +5...-5 degrees about the axis orthogonal to the image frame (TODO: something more convincing needed as a figure). Visually, it looked a lot like this - have a [https://www.youtube.com/watch?v=9PlHgYVYTgQ look].&lt;br /&gt;
&lt;br /&gt;
The angle by which Nao's torso is tilting can easily be read with [http://www.aldebaran-robotics.com/documentation/naoqi/core/almemory-api.html?highlight=memoryproxy#ALMemoryProxy::getData__ssCR AL::ALValue ALMemoryProxy::getData(&amp;quot;device&amp;quot;)]. We then assumed that Nao's torso rotates by the same amount as its head, and treated the connection between torso and head as fixed.&lt;br /&gt;
Having obtained the angle, we had to make the camera's pose depend on the robot's rotation. After some thought we found that we had to modify the camera's rotation matrix according to Nao's torso rotation in a slightly more involved way, as the two are not linearly related.&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10806</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10806"/>
		<updated>2013-05-31T18:36:32Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How to map 2D points to 3D? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need it? ==&lt;br /&gt;
Our goal was to convert coordinates on the image into real-world coordinates, i.e. we wanted to know an object's position relative to the robot's position. This is necessary for the robot to understand where objects are relative to itself. More generally, this is also needed to figure out absolute coordinates on the football field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
We used the pinhole camera model.&lt;br /&gt;
I am not going to describe all the theory here; below is the main formula that describes how objects are projected onto the screen.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Good resources are available for learning about this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus more on the overall idea, the problems we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To be able to convert between the two coordinate systems, we need to know the intrinsic and extrinsic parameters of the camera. The former describes how any real-world object is projected onto the camera’s light sensor. It consists of parameters such as the camera’s focal length, principal point and skew of the image axes. The latter gives information about the camera’s pose in the observed environment (3 rotations and 3 translations, as we live in a 3-dimensional world). To convert a 2-dimensional screen point into a 3-dimensional world point, we also need the extra assumption that all the objects that interest us lie on a specific plane (e.g. on the floor, where Z=0).&lt;br /&gt;
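To make the projection direction concrete, here is a minimal numpy sketch of the pinhole mapping. The intrinsic values (fx = fy = 500, principal point (320, 240)) and the pose are made-up illustrative numbers, not our calibration results:

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths fx, fy; principal point cx, cy).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: identity rotation, camera translated 1 unit along Z.
R = np.eye(3)
t = np.array([[0.0], [0.0], [1.0]])

def project(world_point):
    """Project a 3D world point (X, Y, Z) to pixel coordinates (u, v)."""
    Xw = np.append(world_point, 1.0)      # homogeneous world point (X, Y, Z, 1)
    p = K @ np.hstack([R, t]) @ Xw        # gives s * [u, v, 1]
    return p[:2] / p[2]                   # divide out the scale factor s

uv = project(np.array([0.2, 0.1, 0.0]))   # a point on the Z=0 plane
```

With these toy numbers the point (0.2, 0.1, 0) lands at pixel (420, 290).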
&lt;br /&gt;
== How did we find out the camera parameters and the pose? ==&lt;br /&gt;
We based our calibration system on the OpenCV implementation of an algorithm that matches screen coordinates with known real-world coordinates and estimates the required parameters from many different observations. Because we only need to find those parameters once, we did not see a need to develop anything more complex for this. We found useful OpenCV functions specially designed for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters) (see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); this [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] might also be useful). If you are interested in the inner workings of these algorithms, read the documentation or the source code.&lt;br /&gt;
Having obtained these parameters, we were able to project real-world 3D points onto the image plane. But as this wasn’t our goal (we wanted 2D -&amp;gt; 3D), we had to reverse the equation.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D points to 3D? ==&lt;br /&gt;
There was a bit of chaos and many “wasted” days in reversing this equation. We had problems with inverting matrices. We found that OpenCV's Mat::inv() didn’t give good enough results for most matrices, maybe because OpenCV's pseudo-inverse was buggy or we were using it wrongly. We tried the same numbers with Octave's pinv() and got quite reasonable results, so we recommend being cautious when pseudo-inverting matrices with OpenCV.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&amp;lt;br&amp;gt;&lt;br /&gt;
TODO: A picture of previous formula altered.&lt;br /&gt;
&lt;br /&gt;
In the end, it turned out that it was really easy to solve the equation.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When we assume that Z=0 and (a)&amp;lt;sub&amp;gt;3,4&amp;lt;/sub&amp;gt; denotes the 3x4 matrix obtained by multiplying the camera matrix with the rotation-translation matrix, we actually get the following system of equations.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inv_eq.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We don't know the values of s, X and Y. As we have three unknowns and three equations, we can simply solve the equation system and obtain the real-world coordinates X and Y. We implemented it using the [http://en.wikipedia.org/wiki/Cramers_rule Cramer's rule].&lt;br /&gt;
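The three-unknown system above can be solved mechanically with Cramer's rule. Below is a small numpy sketch of the idea; the matrix layout and names are our own illustration, not the actual module code:

```python
import numpy as np

def image_to_ground(u, v, A):
    """Invert s*[u, v, 1]^T = A @ [X, Y, 0, 1]^T for a point on the Z=0 plane.

    A is the 3x4 product of the camera matrix and the rotation-translation
    matrix. The unknowns are X, Y and the scale s; u, v are known pixels.
    """
    # With Z = 0 the third column of A drops out:
    #   X*a1 + Y*a2 + a4 = s*[u, v, 1]^T
    # Moving the s-term to the left gives M @ [X, Y, s]^T = -a4.
    a1, a2, a4 = A[:, 0], A[:, 1], A[:, 3]
    uv1 = np.array([u, v, 1.0])
    M = np.column_stack([a1, a2, -uv1])
    b = -a4

    # Cramer's rule: replace one column of M by b at a time.
    d = np.linalg.det(M)
    X = np.linalg.det(np.column_stack([b, a2, -uv1])) / d
    Y = np.linalg.det(np.column_stack([a1, b, -uv1])) / d
    return X, Y
```

A quick sanity check is to project a known ground point with the forward formula and confirm this routine recovers it.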
&lt;br /&gt;
== Changing plane rotation while robot is walking ==&lt;br /&gt;
We noticed some quite heavy tilting on the image while Nao was moving; in numbers it was approximately +5...-5 degrees about the axis orthogonal to the image frame (TODO: something more convincing needed as a figure). Visually it looked a lot like this - have a [https://www.youtube.com/watch?v=9PlHgYVYTgQ look].&lt;br /&gt;
&lt;br /&gt;
The angle by which Nao's torso tilts can easily be read with [http://www.aldebaran-robotics.com/documentation/naoqi/core/almemory-api.html?highlight=memoryproxy#ALMemoryProxy::getData__ssCR AL::ALValue ALMemoryProxy::getData(&amp;quot;device&amp;quot;)]. Next we assumed that Nao's torso rotates by the same amount as its head, i.e. that the connection between torso and head is fixed.&lt;br /&gt;
Having obtained the angle, we had to make the camera's pose depend on the robot's rotation. After some thinking we found that we had to modify the camera's rotation matrix according to Nao's torso rotation in a slightly more involved way, as the two are not linearly correlated.&lt;br /&gt;
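The mechanics of folding a measured tilt into the extrinsics can be sketched as below. This is only a first illustration: the axis convention and composition order are assumptions of ours, and, as noted above, the true relation between torso angle and camera rotation is not simply linear:

```python
import numpy as np

def roll_matrix(theta):
    """Rotation by theta radians about the camera's viewing (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def tilt_corrected_rotation(R_calib, torso_angle):
    """Compose the calibrated camera rotation with the measured torso tilt.

    R_calib: 3x3 rotation from calibration (robot standing still).
    torso_angle: angle read from ALMemory, in radians.
    Axis choice and left-multiplication here are illustrative assumptions.
    """
    return roll_matrix(torso_angle) @ R_calib
```

The composed matrix stays a proper rotation, so it can be dropped into the projection formula in place of the calibrated one.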
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10805</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10805"/>
		<updated>2013-05-31T17:03:21Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How To */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Nao_line_detection|Line detection]]&lt;br /&gt;
*[[Getting-real-world-coordinates-from-image-frame|Getting real world coordinates from the image]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here are some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[[Setup_Nao | How to start with Nao?]]&lt;br /&gt;
&lt;br /&gt;
*[http://www.aldebaran-robotics.com/documentation/nao/upgrade.html Upgrade Nao - link to Aldebaran guide.]&lt;br /&gt;
&lt;br /&gt;
*[[Start_learning_robotics | How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Module | How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Keyboard_Control_Python | How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
*[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
*[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki from the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but learned from the default reference of SDK 1.14. We noticed the mismatch but didn't think it would make much of a difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we spent a lot of time on this. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using an old version of the Atom cross-toolchain. When Naoqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all the software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10804</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10804"/>
		<updated>2013-05-31T17:03:07Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Nao_line_detection|Line detection]]&lt;br /&gt;
*[[Getting-real-world-coordinates-from-image-frame|Getting real world coordinates from the image]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here are some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[[Setup_Nao | How to start with Nao?]]&lt;br /&gt;
&lt;br /&gt;
*[http://www.aldebaran-robotics.com/documentation/nao/upgrade.html Upgrade Nao - link to Aldebaran guide.]&lt;br /&gt;
&lt;br /&gt;
*[[Start_learning_robotics | How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Module | How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Keyboard_Control_Python | How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
*[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
*[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki from the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but learned from the default reference of SDK 1.14. We noticed the mismatch but didn't think it would make much of a difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we spent a lot of time on this. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using an old version of the Atom cross-toolchain. When Naoqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all the software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10771</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10771"/>
		<updated>2013-05-22T17:36:24Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Our solutions to some important problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Nao_line_detection|Line detection]]&lt;br /&gt;
*[[Getting-real-world-coordinates-from-image-frame|Getting real world coordinates (Nao world) from image frame]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here are some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
*[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
*[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
*[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki from the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but learned from the default reference of SDK 1.14. We noticed the mismatch but didn't think it would make much of a difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we spent a lot of time on this. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using an old version of the Atom cross-toolchain. When Naoqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all the software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10770</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10770"/>
		<updated>2013-05-22T17:35:44Z</updated>

		<summary type="html">&lt;p&gt;Siims: This page describes how we found solved mapping coordinates from 2D image plane to 3D real world soccer field plane.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need this part? ==&lt;br /&gt;
Our purpose was to convert items on the image to real-world coordinates, e.g. we wanted to know an item's placement relative to the robot's placement. This is necessary for the robot to understand where objects are relative to itself and, in the bigger picture, to know where it is relative to the soccer field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
For this, we used the pinhole camera model.&lt;br /&gt;
I am not going to describe all the theory for it; here is what the pinhole model does in one formula:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are good resources available for learning this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus more on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To be able to transform between the two coordinate systems we need to know the camera's intrinsic and extrinsic parameters. The former describes how any real-world object is projected onto the camera’s light sensor. It consists of parameters such as the camera’s focal length, principal point and skew of the image axes. The latter gives information about the camera’s pose in the observed environment (3 rotations and 3 translations, as we live in a 3-dimensional world). To convert a 2-dimensional point into the 3-dimensional world we also need the extra assumption that the objects that interest us lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
== How did we get the camera parameters and pose? ==&lt;br /&gt;
We based our calibration system on the OpenCV implementations for finding all of these parameters. We didn’t see any need for anything top-notch in terms of speed, because these parameters only need to be found once and can then be reused. We found useful OpenCV functions specially designed for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters) (see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); this [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] might also be useful). If you are interested in the algorithms that make it work, read the documentation or the source code.&lt;br /&gt;
Having obtained these parameters we were able to project real-world 3D points onto the image plane. But as this wasn’t our goal (we wanted 2D -&amp;gt; 3D) we had to keep going.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D point to 3D? ==&lt;br /&gt;
There was a bit of chaos and many “wasted” days in reversing this operation. We had problems with inverting matrices. OpenCV's Mat::inv() didn’t give the right results, and the matrix pseudo-inverse didn’t seem to work either; probably these matrices weren’t invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
TODO: A picture of previous formula altered.&lt;br /&gt;
&lt;br /&gt;
In the end we solved equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;br /&gt;
&lt;br /&gt;
== Changing plane rotation while robot is on the move ==&lt;br /&gt;
We noticed quite aggressive tilting on the image while Nao was moving; in numbers it was approximately +5...-5 degrees about the axis orthogonal to the image frame (TODO: something more convincing needed as a figure). Visually it seemed like a lot - have a [https://www.youtube.com/watch?v=9PlHgYVYTgQ look].&lt;br /&gt;
&lt;br /&gt;
The angle by which Nao's torso tilts can easily be measured with [http://www.aldebaran-robotics.com/documentation/naoqi/core/almemory-api.html?highlight=memoryproxy#ALMemoryProxy::getData__ssCR AL::ALValue ALMemoryProxy::getData(&amp;quot;device&amp;quot;)]. Next we assumed that Nao's torso rotates by the same amount as its head, i.e. that the connection between torso and head is fixed.&lt;br /&gt;
Having obtained the angle, we had to make the camera pose depend on the robot's rotation. After some thinking and drawing we found that we had to update the camera rotation matrix according to Nao's torso rotation.&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao_line_detection&amp;diff=10768</id>
		<title>Nao line detection</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao_line_detection&amp;diff=10768"/>
		<updated>2013-05-22T17:11:41Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
We need to detect the white lines on the field. The main reason is that they help the robot to localize itself on the field. Line detection, however, is a computationally quite complex task, so we have tried a few different approaches. There are implementations of the [http://en.wikipedia.org/wiki/Hough_transform Hough Transform] in the OpenCV library, but they tend to be a bit too general-purpose and slow for our needs. We also tried [http://en.wikipedia.org/wiki/RANSAC RANSAC], but it became too slow when the number of line points increased. Now we are using a Randomized Hough Transform that we implemented following the description in &amp;lt;i&amp;gt;L. Xu, E. Oja, P. Kultanen &amp;quot;A new curve detection method: Randomized Hough transform&amp;quot;&amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Our implementation ==&lt;br /&gt;
The line detection algorithm takes a vector of point coordinates as input. We produce these points during the segmentation process in the following way. First we scan every n-th row and every n-th column (n is something like 5, 10 or 20) for places where a green and a white pixel are next to each other. We only scan part of the image because we don't actually need many points to form the lines; it is a compromise between performance and detection accuracy.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
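The scanning step described above can be sketched as follows. This is a simplified illustration over a pre-classified label image; the function name and the label strings are hypothetical, not from our actual segmentation code:

```python
def boundary_points(labels, n):
    """Scan every n-th row and every n-th column of a label image for
    green-to-white transitions; return their (x, y) coordinates."""
    height = len(labels)
    width = len(labels[0])
    points = []
    for y in range(0, height, n):          # every n-th row, scanned left to right
        for x in range(1, width):
            if labels[y][x - 1] == "green" and labels[y][x] == "white":
                points.append((x, y))
    for x in range(0, width, n):           # every n-th column, scanned top-down
        for y in range(1, height):
            if labels[y - 1][x] == "green" and labels[y][x] == "white":
                points.append((x, y))
    return points
```

Larger n means fewer points and less work per frame, at the cost of sparser evidence for the line fitter.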
&lt;br /&gt;
First the line detection algorithm shuffles the points vector, so that we get a uniform distribution. After that the algorithm starts taking points out of it by looping through the vector; that way we get random line points, because the distribution is uniform. We find the line equation parameters of every line formed by two points and add them to a list if they are unique, or increase their score by one if not. If we find a pair of line parameters with a score of 3, we remove from the vector all the points that lie close to that line. If the number of such points is big enough, it is considered a new line. The algorithm proceeds until no line has been found for the last few cycles.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
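The random-pair voting loop described above can be sketched roughly like this. It is a bare-bones illustration; the parameter values and the coarse parameter quantisation are our own choices here, not the tuned ones from the actual module:

```python
import math
import random
from collections import defaultdict

def detect_lines(points, score_needed=3, min_inliers=20, tol=2.0, max_idle=500):
    """Randomized Hough transform sketch: vote with random point pairs,
    then claim a line once its quantised parameters reach score_needed
    votes and enough points lie within tol pixels of it."""
    points = list(points)
    random.shuffle(points)                 # uniform order, so pairs are random
    scores = defaultdict(int)
    lines = []
    idle = 0
    while len(points) > 1 and idle != max_idle:
        (x1, y1), (x2, y2) = random.sample(points, 2)
        # Line in normal form: x*cos(a) + y*sin(a) = r.
        a = math.atan2(x1 - x2, y2 - y1)   # direction of the segment's normal
        r = x1 * math.cos(a) + y1 * math.sin(a)
        if 0 > r:                          # normalise so r is non-negative
            r, a = -r, a + math.pi
        key = (round(math.degrees(a)), round(r))
        scores[key] += 1
        idle += 1
        if scores[key] == score_needed:
            ca, sa = math.cos(a), math.sin(a)
            close = [p for p in points if tol >= abs(p[0] * ca + p[1] * sa - r)]
            if len(close) >= min_inliers:  # enough support: accept the line
                lines.append((a, r))
                for p in close:            # remove its points from the pool
                    points.remove(p)
                idle = 0
    return lines
```

Because collinear pairs keep voting for the same quantised (angle, distance) cell, a real line's cell reaches the score threshold quickly, while cells from unrelated pairs stay sparse.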
&lt;br /&gt;
The algorithm also calculates each line's starting and ending points. This is done by hashing all of the line's points into a small array (currently of size 40) and then choosing the longest consistent sequence; the start and end of that sequence give the line's starting and ending points.&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10767</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10767"/>
		<updated>2013-05-22T17:06:24Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Our solutions to some important problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Nao_line_detection|Line detection]]&lt;br /&gt;
*[[Getting-real-world-coordinates-from-image-frame|Getting real world coordinates (Nao world) from fixed image frame]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here are some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
*[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
*[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
*[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki from the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but learned from the default reference of SDK 1.14. We noticed the mismatch but didn't think it would make much of a difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we spent a lot of time on this. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using an old version of the Atom cross-toolchain. When Naoqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all the software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10766</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10766"/>
		<updated>2013-05-22T17:05:35Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need this part? ==&lt;br /&gt;
Our purpose was to convert items on the image to real-world coordinates, e.g. we wanted to know an item's placement relative to the robot's placement. This is necessary for the robot to understand where objects are relative to itself and, in the bigger picture, to know where it is relative to the soccer field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
For this, we used the pinhole camera model.&lt;br /&gt;
I am not going to describe all the theory for it; here is what the pinhole model does in one formula:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are good resources available for learning this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus more on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To be able to transform between the two coordinate systems we need to know the camera's intrinsic and extrinsic parameters. The former describes how any real-world object is projected onto the camera’s light sensor. It consists of parameters such as the camera’s focal length, principal point and skew of the image axes. The latter gives information about the camera’s pose in the observed environment (3 rotations and 3 translations, as we live in a 3-dimensional world). To convert a 2-dimensional point into the 3-dimensional world we also need the extra assumption that the objects that interest us lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
How did we get the camera parameters and pose?&lt;br /&gt;
We based our calibration system on OpenCV's implementations for finding all of these parameters. We saw no need for anything top-notch in terms of speed, because these parameters need to be obtained only once and can then be reused indefinitely. OpenCV provides functions designed specifically for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters); see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); the [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful. If you are interested in the algorithms that make it all work, consult the documentation or the source code.&lt;br /&gt;
With these parameters in hand we were able to project real-world 3D points onto the image plane. But since this was not our goal (we wanted 2D -&amp;gt; 3D), we had to keep going.&lt;br /&gt;
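As a rough illustration of the projection just described, here is a minimal sketch in plain Python (no OpenCV). The intrinsic values and the pose used below are made-up example numbers, not our Nao's actual calibration results.

```python
# Pinhole projection sketch: camera coordinates Xc = R*X + t, then a
# perspective divide, then the intrinsics (fx, fy, cx, cy) map onto pixels.

def project_point(K, R, t, point):
    """Project a 3D world point onto the image plane (pinhole model)."""
    # camera-frame coordinates of the point
    cam = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    # perspective divide by depth, then apply the intrinsic parameters
    x, y = cam[0] / cam[2], cam[1] / cam[2]
    fx, fy, cx, cy = K
    return (fx * x + cx, fy * y + cy)
```

With an identity rotation, zero translation and principal point (cx, cy), a point straight ahead of the camera lands exactly on the principal point.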
&lt;br /&gt;
== How to map 2D point to 3D? ==&lt;br /&gt;
Reversing this operation involved a bit of chaos and many “wasted” days. We had problems with inverting matrices: OpenCV's Mat::inv() did not give correct results, and a matrix pseudo-inverse did not seem to work either; most likely these matrices were simply not invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
&lt;br /&gt;
In the end we solved equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;br /&gt;
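For reference, a 3x3 Cramer's-rule solver like the one we fell back on can be sketched as below; the assembly of A and b from the projection equations (unknowns: the plane coordinates and the scale factor) is assumed to have been done beforehand.

```python
# Solve a 3x3 linear system A x = b with Cramer's rule: each unknown is the
# ratio of two determinants, so no matrix inversion is needed.

def det3(m):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_cramer(A, b):
    """Solve A x = b for three unknowns; fails loudly on a singular system."""
    d = det3(A)
    if d == 0:
        raise ValueError("singular system: Cramer's rule not applicable")
    solution = []
    for k in range(3):
        # replace column k of A with b, take the determinant, divide by det(A)
        Ak = [[b[i] if j == k else A[i][j] for j in range(3)] for i in range(3)]
        solution.append(det3(Ak) / d)
    return solution
```

A side benefit of this route: when the determinant is (near) zero you see it explicitly, instead of getting a silently wrong inverse.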
&lt;br /&gt;
== Changing plane rotation while robot is on the move ==&lt;br /&gt;
We noticed quite aggressive tilting of the image while Nao was moving: roughly +5...-5 degrees around the axis orthogonal to the image frame (TODO: something more convincing needed as a figure). Visually it seemed like a lot; have a [https://www.youtube.com/watch?v=9PlHgYVYTgQ look].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10763</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10763"/>
		<updated>2013-05-22T16:52:20Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Our solutions to some important problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics, the software side of robotics to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Nao_line_detection|Line detection]]&lt;br /&gt;
*[[Getting-real-world-coordinates-from-image-frame|Getting real world coordinates (Nao world) from fixed image frame]]&lt;br /&gt;
*[[Changing-plane-rotation-while-robot-is-on-the-move|Get world coordinates while Nao is on the move]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for the Nao is full of challenges. Here are some tutorials and instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
*[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
*[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
*[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
Our Naos started out with version 1.12 of everything, while we learned from the default reference of SDK 1.14. We noticed this but did not think it would make much of a difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall spent a lot of time on this. So update and work with the newest and coolest modules, or use the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good upgrade tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran maintains a list of functions that have changed: [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating the Naos, don't forget to also update the SDK, cross-toolchain, qibuild, etc. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built with the old version of the Atom cross-toolchain; when NAOqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Pin_hole_camera_model&amp;diff=10762</id>
		<title>Pin hole camera model</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Pin_hole_camera_model&amp;diff=10762"/>
		<updated>2013-05-22T16:51:14Z</updated>

		<summary type="html">&lt;p&gt;Siims: Siims moved page Pin hole camera model to Getting-real-world-coordinates-from-image-frame: link name was confusing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Getting-real-world-coordinates-from-image-frame]]&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10761</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10761"/>
		<updated>2013-05-22T16:51:13Z</updated>

		<summary type="html">&lt;p&gt;Siims: Siims moved page Pin hole camera model to Getting-real-world-coordinates-from-image-frame: link name was confusing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need this part? ==&lt;br /&gt;
Our goal was to convert items detected in the image into real-world coordinates, i.e. to determine an object's position relative to the robot. This is necessary for the robot to understand where objects are relative to it and, looking at the bigger picture, to know where it is located on the soccer field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
For this we used the pinhole camera model.&lt;br /&gt;
I will not describe all of its theory here; this single formula captures what the pinhole model does:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are good resources available for learning about this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus instead on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To transform between the two coordinate systems we need to know the camera's intrinsic and extrinsic parameters. The former describe how a real-world object is projected onto the camera's light sensor; they consist of parameters such as the camera's focal length, principal point and the skew of the image axes. The latter describe the camera's pose in the observed environment (3 rotations and 3 translations, since we live in a three-dimensional world). To convert a 2D point into the 3D world we additionally need to assume that the objects of interest lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
How did we get the camera parameters and pose?&lt;br /&gt;
We based our calibration system on OpenCV's implementations for finding all of these parameters. We saw no need for anything top-notch in terms of speed, because these parameters need to be obtained only once and can then be reused indefinitely. OpenCV provides functions designed specifically for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters); see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); the [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful. If you are interested in the algorithms that make it all work, consult the documentation or the source code.&lt;br /&gt;
With these parameters in hand we were able to project real-world 3D points onto the image plane. But since this was not our goal (we wanted 2D -&amp;gt; 3D), we had to keep going.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D point to 3D? ==&lt;br /&gt;
Reversing this operation involved a bit of chaos and many “wasted” days. We had problems with inverting matrices: OpenCV's Mat::inv() did not give correct results, and a matrix pseudo-inverse did not seem to work either; most likely these matrices were simply not invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
&lt;br /&gt;
In the end we solved equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10760</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10760"/>
		<updated>2013-05-22T16:49:31Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Pinhole camera model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need this part? ==&lt;br /&gt;
Our goal was to convert items detected in the image into real-world coordinates, i.e. to determine an object's position relative to the robot. This is necessary for the robot to understand where objects are relative to it and, looking at the bigger picture, to know where it is located on the soccer field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
For this we used the pinhole camera model.&lt;br /&gt;
I will not describe all of its theory here; this single formula captures what the pinhole model does:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are good resources available for learning about this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus instead on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To transform between the two coordinate systems we need to know the camera's intrinsic and extrinsic parameters. The former describe how a real-world object is projected onto the camera's light sensor; they consist of parameters such as the camera's focal length, principal point and the skew of the image axes. The latter describe the camera's pose in the observed environment (3 rotations and 3 translations, since we live in a three-dimensional world). To convert a 2D point into the 3D world we additionally need to assume that the objects of interest lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
How did we get the camera parameters and pose?&lt;br /&gt;
We based our calibration system on OpenCV's implementations for finding all of these parameters. We saw no need for anything top-notch in terms of speed, because these parameters need to be obtained only once and can then be reused indefinitely. OpenCV provides functions designed specifically for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters); see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); the [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful. If you are interested in the algorithms that make it all work, consult the documentation or the source code.&lt;br /&gt;
With these parameters in hand we were able to project real-world 3D points onto the image plane. But since this was not our goal (we wanted 2D -&amp;gt; 3D), we had to keep going.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D point to 3D? ==&lt;br /&gt;
Reversing this operation involved a bit of chaos and many “wasted” days. We had problems with inverting matrices: OpenCV's Mat::inv() did not give correct results, and a matrix pseudo-inverse did not seem to work either; most likely these matrices were simply not invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
&lt;br /&gt;
In the end we solved equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10759</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10759"/>
		<updated>2013-05-22T16:49:17Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Pinhole camera model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need this part? ==&lt;br /&gt;
Our goal was to convert items detected in the image into real-world coordinates, i.e. to determine an object's position relative to the robot. This is necessary for the robot to understand where objects are relative to it and, looking at the bigger picture, to know where it is located on the soccer field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
For this we used the pinhole camera model.&lt;br /&gt;
I will not describe all of its theory here; this single formula captures what the pinhole model does:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
There are good resources available for learning about this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus instead on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To transform between the two coordinate systems we need to know the camera's intrinsic and extrinsic parameters. The former describe how a real-world object is projected onto the camera's light sensor; they consist of parameters such as the camera's focal length, principal point and the skew of the image axes. The latter describe the camera's pose in the observed environment (3 rotations and 3 translations, since we live in a three-dimensional world). To convert a 2D point into the 3D world we additionally need to assume that the objects of interest lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
How did we get the camera parameters and pose?&lt;br /&gt;
We based our calibration system on OpenCV's implementations for finding all of these parameters. We saw no need for anything top-notch in terms of speed, because these parameters need to be obtained only once and can then be reused indefinitely. OpenCV provides functions designed specifically for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters); see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); the [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful. If you are interested in the algorithms that make it all work, consult the documentation or the source code.&lt;br /&gt;
With these parameters in hand we were able to project real-world 3D points onto the image plane. But since this was not our goal (we wanted 2D -&amp;gt; 3D), we had to keep going.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D point to 3D? ==&lt;br /&gt;
Reversing this operation involved a bit of chaos and many “wasted” days. We had problems with inverting matrices: OpenCV's Mat::inv() did not give correct results, and a matrix pseudo-inverse did not seem to work either; most likely these matrices were simply not invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
&lt;br /&gt;
In the end we solved equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10758</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10758"/>
		<updated>2013-05-22T16:48:39Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Pinhole camera model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need this part? ==&lt;br /&gt;
Our goal was to convert items detected in the image into real-world coordinates, i.e. to determine an object's position relative to the robot. This is necessary for the robot to understand where objects are relative to it and, looking at the bigger picture, to know where it is located on the soccer field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
For this we used the pinhole camera model.&lt;br /&gt;
I will not describe all of its theory here; this single formula captures what the pinhole model does:&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&lt;br /&gt;
There are good resources available for learning about this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus instead on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To transform between the two coordinate systems we need to know the camera's intrinsic and extrinsic parameters. The former describe how a real-world object is projected onto the camera's light sensor; they consist of parameters such as the camera's focal length, principal point and the skew of the image axes. The latter describe the camera's pose in the observed environment (3 rotations and 3 translations, since we live in a three-dimensional world). To convert a 2D point into the 3D world we additionally need to assume that the objects of interest lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
How did we get the camera parameters and pose?&lt;br /&gt;
We based our calibration system on OpenCV's implementations for finding all of these parameters. We saw no need for anything top-notch in terms of speed, because these parameters need to be obtained only once and can then be reused indefinitely. OpenCV provides functions designed specifically for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters); see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); the [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful. If you are interested in the algorithms that make it all work, consult the documentation or the source code.&lt;br /&gt;
With these parameters in hand we were able to project real-world 3D points onto the image plane. But since this was not our goal (we wanted 2D -&amp;gt; 3D), we had to keep going.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D point to 3D? ==&lt;br /&gt;
Reversing this operation involved a bit of chaos and many “wasted” days. We had problems with inverting matrices: OpenCV's Mat::inv() did not give correct results, and a matrix pseudo-inverse did not seem to work either; most likely these matrices were simply not invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
&lt;br /&gt;
In the end we solved equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=File:Pinhole_Camera_Model.png&amp;diff=10757</id>
		<title>File:Pinhole Camera Model.png</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=File:Pinhole_Camera_Model.png&amp;diff=10757"/>
		<updated>2013-05-22T16:47:39Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10756</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10756"/>
		<updated>2013-05-22T16:45:40Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Our solutions to some important problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics, the software side of robotics to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Nao_line_detection|Line detection]]&lt;br /&gt;
*[[Pin_hole_camera_model|Getting real world coordinates (Nao world) from fixed image frame]]&lt;br /&gt;
*[[Changing-plane-rotation-while-robot-is-on-the-move|Get world coordinates while Nao is on the move]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for the Nao is full of challenges. Here are some tutorials and instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
*[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
*[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
*[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
*[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
Our Naos started out with version 1.12 of everything, while we learned from the default reference of SDK 1.14. We noticed this but did not think it would make much of a difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall spent a lot of time on this. So update and work with the newest and coolest modules, or use the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good upgrade tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran maintains a list of functions that have changed: [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating the Naos, don't forget to also update the SDK, cross-toolchain, qibuild, etc. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built with the old version of the Atom cross-toolchain; when NAOqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10755</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10755"/>
		<updated>2013-05-22T16:42:56Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why do we need this part? ==&lt;br /&gt;
Our goal was to convert items detected in the image into real-world coordinates, i.e. to determine an object's position relative to the robot. This is necessary for the robot to understand where objects are relative to it and, looking at the bigger picture, to know where it is located on the soccer field.&lt;br /&gt;
&lt;br /&gt;
== Pinhole camera model ==&lt;br /&gt;
For this we used the pinhole camera model.&lt;br /&gt;
I will not describe all of its theory here; this single formula captures what the pinhole model does:&lt;br /&gt;
[[File:Pinhole_Camera_Model.png]]&lt;br /&gt;
&lt;br /&gt;
There are good resources available for learning about this model (start with [http://en.wikipedia.org/wiki/Pinhole_camera_model wiki] and [https://www.youtube.com/watch?v=uhP3jrxraMk udacity]); here we focus instead on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
== How to map coordinates from 3D to 2D? ==&lt;br /&gt;
To transform between the two coordinate systems we need to know the camera's intrinsic and extrinsic parameters. The former describe how a real-world object is projected onto the camera's light sensor; they consist of parameters such as the camera's focal length, principal point and the skew of the image axes. The latter describe the camera's pose in the observed environment (3 rotations and 3 translations, since we live in a three-dimensional world). To convert a 2D point into the 3D world we additionally need to assume that the objects of interest lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
How did we get the camera parameters and pose?&lt;br /&gt;
We based our calibration system on OpenCV's implementations for finding all of these parameters. We saw no need for anything top-notch in terms of speed, because these parameters need to be obtained only once and can then be reused indefinitely. OpenCV provides functions designed specifically for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters); see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); the [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful. If you are interested in the algorithms that make it all work, consult the documentation or the source code.&lt;br /&gt;
With these parameters in hand we were able to project real-world 3D points onto the image plane. But since this was not our goal (we wanted 2D -&amp;gt; 3D), we had to keep going.&lt;br /&gt;
&lt;br /&gt;
== How to map 2D point to 3D? ==&lt;br /&gt;
Reversing this operation involved a bit of chaos and many “wasted” days. We had problems with inverting matrices: OpenCV's Mat::inv() did not give correct results, and a matrix pseudo-inverse did not seem to work either; most likely these matrices were simply not invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
&lt;br /&gt;
In the end we solved equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;br /&gt;
&lt;br /&gt;
== Performance ==&lt;br /&gt;
TODO:&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao_segmentation&amp;diff=10750</id>
		<title>Nao segmentation</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao_segmentation&amp;diff=10750"/>
		<updated>2013-05-22T16:19:23Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
We need some criteria for detecting real-world objects. The easiest way to distinguish different objects is by their colour. As many objects on the football field are colour-coded, it is reasonable to use colour, and it is also computationally very efficient. This means that we classify every pixel into a colour class and, once that is done, form blobs of similar pixels.&lt;br /&gt;
&lt;br /&gt;
== Pixel classification ==&lt;br /&gt;
We use a lookup table to classify pixels into colour classes. We have a special program that lets us easily construct new lookup tables, and this method has worked well enough. The lookup table format is described [http://www.cs.cmu.edu/~jbruce/cmvision/papers/JBThesis00.pdf here] on pages 15-16.&lt;br /&gt;
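In the spirit of that format (as we understand it): one table per channel, each entry a bitmask of colour classes, and a pixel belongs to the classes whose bits survive the AND of its three channel entries. The class bits and YUV threshold ranges below are invented for illustration, not our real calibration:

```python
# Sketch of bitmask lookup-table classification: one 256-entry table per
# channel; ANDing a pixel's three entries yields its colour-class bitmask.
# Class bits and threshold ranges are invented for illustration.

ORANGE, GREEN = 1 << 0, 1 << 1     # one bit per colour class

def build_channel_lut(ranges):
    """ranges maps class bit -> inclusive (lo, hi) interval for this channel."""
    lut = [0] * 256
    for bit, (lo, hi) in ranges.items():
        for value in range(lo, hi + 1):
            lut[value] |= bit
    return lut

lut_y = build_channel_lut({ORANGE: (0, 255), GREEN: (0, 255)})   # any luma
lut_u = build_channel_lut({ORANGE: (60, 140), GREEN: (60, 140)})
lut_v = build_channel_lut({ORANGE: (160, 255), GREEN: (0, 110)})

def classify(y, u, v):
    """Return the bitmask of colour classes this YUV pixel belongs to."""
    return lut_y[y] & lut_u[u] & lut_v[v]

print(classify(128, 100, 200) == ORANGE)  # a high-V pixel lands in the orange class
```

Classification is then three array reads and two ANDs per pixel, which is what makes this approach so cheap.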
&lt;br /&gt;
== Blob formation ==&lt;br /&gt;
After the pixels have been classified, blobs are formed from pixels of the same colour class. Blob formation is done using the CMVision algorithms, with slight modifications to work better with our own data structures. The algorithms are described in detail [http://www.cs.cmu.edu/~jbruce/cmvision/papers/JBThesis00.pdf here] on pages 16-19.&lt;br /&gt;
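The grouping step itself can be sketched as a connected-component pass over the classified pixels. CMVision does this efficiently with run-length encoding; the plain BFS version below, run on a made-up class map, only illustrates what “forming blobs” means:

```python
# Minimal blob-formation sketch: 4-connected components over a grid of
# per-pixel colour classes (0 = unclassified), grouped with a plain BFS.
from collections import deque

def find_blobs(class_map):
    """Group equal, nonzero neighbouring cells; return blobs sorted by area."""
    h, w = len(class_map), len(class_map[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []                                 # (class, area, pixels)
    for y in range(h):
        for x in range(w):
            if seen[y][x] or class_map[y][x] == 0:
                continue
            cls, pixels, queue = class_map[y][x], [], deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and class_map[ny][nx] == cls:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            blobs.append((cls, len(pixels), pixels))
    return sorted(blobs, key=lambda b: -b[1])  # largest area first

grid = [[1, 1, 0, 2],
        [0, 1, 0, 2],
        [0, 0, 0, 2]]
print([(cls, area) for cls, area, _ in find_blobs(grid)])  # -> [(1, 3), (2, 3)]
```

Sorting by area at the end is what lets later code grab the largest blob of a colour class in constant time.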
&lt;br /&gt;
== End result ==&lt;br /&gt;
Applying those techniques to a frame gives us an array of blob lists. Every blob list holds the blobs belonging to one colour class, sorted by area, so it is easy to find the largest blobs of a specific colour.&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10749</id>
		<title>Getting-real-world-coordinates-from-image-frame</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Getting-real-world-coordinates-from-image-frame&amp;diff=10749"/>
		<updated>2013-05-22T16:14:51Z</updated>

		<summary type="html">&lt;p&gt;Siims: Created page with &amp;quot;Our purpose was to convert items on the image e.g. image coordinates to real world coordinates e.g. item placement relative to camera placement. For this, we used pinhole came...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Our purpose was to convert items in the image (i.e. image coordinates) into real-world coordinates (i.e. item placement relative to the camera). For this we used the pinhole camera model.&lt;br /&gt;
&lt;br /&gt;
I am not going to describe all the theory; there are good resources available (start with http://en.wikipedia.org/wiki/Pinhole_camera_model). Instead I will focus on the overall idea, the troubles we had and the tools we used.&lt;br /&gt;
&lt;br /&gt;
Overall idea…&lt;br /&gt;
To transform between the two coordinate systems, we need to know the camera’s intrinsic and extrinsic parameters. The former describes how a real-world object is projected onto the camera’s light sensor; it consists of parameters such as the camera’s focal length, principal point and skew of the image axes. The latter gives the camera’s pose in the observed environment (3 rotations and 3 translations, as we live in a 3-dimensional world). To convert a 2-dimensional point into the 3-dimensional world, we also need to make the extra assumption that the objects of interest lie on a plane that we determine.&lt;br /&gt;
&lt;br /&gt;
How did we get all of these parameters?&lt;br /&gt;
&lt;br /&gt;
We based our calibration system on the OpenCV implementations for finding all of these parameters. We saw no need to make anything top-notch in terms of speed, because these parameters only have to be computed once and can then be reused indefinitely. OpenCV has functions designed specifically for finding the camera matrix (intrinsic parameters) and the rotation-translation matrix (extrinsic parameters): see the [http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html documentation] for calibrateCamera(), findChessboardCorners(), drawChessboardCorners() and projectPoints(); the [http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html tutorial] may also be useful. If you are interested in the algorithms that make it work, consult the documentation or the source code.&lt;br /&gt;
&lt;br /&gt;
With these parameters in hand, we were able to project real-world 3D points onto the image plane. But as this wasn’t our goal (we wanted 2D -&amp;gt; 3D), we had to keep going. There was a bit of chaos and many “wasted” days spent reversing this operation. We had problems inverting matrices: OpenCV’s Mat::inv() didn’t give the right results, and a matrix pseudo-inverse also appeared not to work – probably these matrices weren’t invertible.&lt;br /&gt;
&lt;br /&gt;
TODO: Dig deeper, what was the problem with not being able to invert those matrices.&lt;br /&gt;
&lt;br /&gt;
In the end we solved the equations with [http://en.wikipedia.org/wiki/Cramer's_rule Cramer’s rule].&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10747</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10747"/>
		<updated>2013-05-22T16:02:11Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Our solutions to some important problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics, software part of robotics to be more precise. After Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who just waited to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
*[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
*[[Pin_hole_camera_model|Getting real world coordinates (Nao world) from fixed image frame]]&lt;br /&gt;
*[[Creating_horizon_aligned_grid|Get world coordinates while Nao is on the move]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here we have some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
As you are reading this, you are already headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but started learning from the default reference of SDK 1.14. We noticed this but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we have spent a lot of time on this. So update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating the Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. as well. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using the old version of the Atom cross-toolchain. When Naoqi tried loading the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get the state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10746</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10746"/>
		<updated>2013-05-22T15:58:29Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Our solutions to some important problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics, software part of robotics to be more precise. After Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who just waited to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Our solutions to some important problems ==&lt;br /&gt;
[[Nao_segmentation|Image segmentation and blob detection]]&lt;br /&gt;
[[Pin_hole_camera_model|Getting real world coordinates (Nao world) from fixed image frame]]&lt;br /&gt;
[[Creating_horizon_aligned_grid|Get world coordinates while Nao is on the move]]&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here we have some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
As you are reading this, you are already headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but started learning from the default reference of SDK 1.14. We noticed this but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we have spent a lot of time on this. So update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating the Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. as well. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using the old version of the Atom cross-toolchain. When Naoqi tried loading the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get the state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=File:MoveNaoWithKeyboard.zip&amp;diff=10602</id>
		<title>File:MoveNaoWithKeyboard.zip</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=File:MoveNaoWithKeyboard.zip&amp;diff=10602"/>
		<updated>2013-03-11T19:03:54Z</updated>

		<summary type="html">&lt;p&gt;Siims: Siims uploaded a new version of &amp;amp;quot;File:MoveNaoWithKeyboard.zip&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Set_up_Opencv&amp;diff=10583</id>
		<title>Set up Opencv</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Set_up_Opencv&amp;diff=10583"/>
		<updated>2013-03-02T22:26:43Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are many tutorials and I am not going to produce another one...&lt;br /&gt;
[http://www.ozbotz.org/opencv-installation/ Try this one for Linux Ubuntu 12.04 32bit OpenCV 2.4.2.]&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Set_up_Opencv&amp;diff=10582</id>
		<title>Set up Opencv</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Set_up_Opencv&amp;diff=10582"/>
		<updated>2013-03-02T22:25:43Z</updated>

		<summary type="html">&lt;p&gt;Siims: Created page with &amp;quot;There are many tutorials and I am not going to produce another one... [http://www.ozbotz.org/opencv-installation/ Try this one for OpenCV 2.4.2.]&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are many tutorials and I am not going to produce another one...&lt;br /&gt;
[http://www.ozbotz.org/opencv-installation/ Try this one for OpenCV 2.4.2.]&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10581</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10581"/>
		<updated>2013-03-02T22:24:07Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How To */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics, software part of robotics to be more precise. After Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who just waited to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here we have some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
[[Set_up_Opencv | How to set up OpenCV?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
As you are reading this, you are already headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but started learning from the default reference of SDK 1.14. We noticed this but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we have spent a lot of time on this. So update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating the Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. as well. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using the old version of the Atom cross-toolchain. When Naoqi tried loading the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get the state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Coping_with_Linux&amp;diff=10580</id>
		<title>Coping with Linux</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Coping_with_Linux&amp;diff=10580"/>
		<updated>2013-03-02T20:44:14Z</updated>

		<summary type="html">&lt;p&gt;Siims: Created page with &amp;quot;Useful commands: &amp;lt;ul&amp;gt; &amp;lt;li&amp;gt;Remove a file: $ rm&amp;lt;/li&amp;gt; &amp;lt;li&amp;gt;Remove a directory: $ rm -r&amp;lt;/li&amp;gt; &amp;lt;li&amp;gt;Find files and delete them: $ find / -iname file/name/with/regexp -exec rm -f {} \;...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Useful commands:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove a file: $ rm&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Remove a directory: $ rm -r&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Find files and delete them: $ find / -iname file/name/with/regexp -exec rm -f {} \;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Find directories and delete them: $ find / -name file/name/with/regexp -exec rm -r -f {} \;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;To find folder/file names: $ locate somename&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;After creating or moving files, refresh the database that locate uses, so that the new names are found: $ sudo updatedb&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Change file permissions: $ chmod 644 (other numbers can be looked up from the [http://en.wikipedia.org/wiki/Chmod wiki])&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10579</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10579"/>
		<updated>2013-03-02T20:35:35Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How To */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics, software part of robotics to be more precise. After Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who just waited to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here we have some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
[[Coping_with_Linux | How to cope with linux?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
As you are reading this, you are already headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but started learning from the default reference of SDK 1.14. We noticed this but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we have spent a lot of time on this. So update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating the Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. as well. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using the old version of the Atom cross-toolchain. When Naoqi tried loading the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get the state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao_Keyboard_Control_Python&amp;diff=10563</id>
		<title>Nao Keyboard Control Python</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao_Keyboard_Control_Python&amp;diff=10563"/>
		<updated>2013-02-23T18:10:56Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Python + Pygame + Pynaoqi + Nao Python SDK = fun to walk around:)&lt;br /&gt;
&lt;br /&gt;
I also had a practical purpose ;) ... to have some kind of remote-control method for making realistic videos from the Nao cameras.&lt;br /&gt;
[[File:MoveNaoWithKeyboard.zip]]&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=File:MoveNaoWithKeyboard.zip&amp;diff=10562</id>
		<title>File:MoveNaoWithKeyboard.zip</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=File:MoveNaoWithKeyboard.zip&amp;diff=10562"/>
		<updated>2013-02-23T18:10:17Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao_Keyboard_Control_Python&amp;diff=10561</id>
		<title>Nao Keyboard Control Python</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao_Keyboard_Control_Python&amp;diff=10561"/>
		<updated>2013-02-23T18:07:35Z</updated>

		<summary type="html">&lt;p&gt;Siims: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Python + Pygame + Pynaoqi + Nao Python SDK = fun to walk around:)&lt;br /&gt;
&lt;br /&gt;
I also had a practical purpose ;) ... to have some kind of remote-control method for making realistic videos from the Nao cameras.&lt;br /&gt;
[[File:keyboardControl.py]]&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao_Keyboard_Control_Python&amp;diff=10560</id>
		<title>Nao Keyboard Control Python</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao_Keyboard_Control_Python&amp;diff=10560"/>
		<updated>2013-02-23T18:05:18Z</updated>

		<summary type="html">&lt;p&gt;Siims: Created page with &amp;quot;Python + Pygame + Pynaoqi = fun to walk around:)  I had a practical purpose also;) ... to have some kind of remote control method for making realistic videos from Nao cameras.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Python + Pygame + Pynaoqi = fun to walk around:)&lt;br /&gt;
&lt;br /&gt;
I also had a practical purpose ;) ... to have some kind of remote-control method for making realistic videos from the Nao cameras.&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10559</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10559"/>
		<updated>2013-02-23T18:00:41Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How To */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people who are interested in robotics, software part of robotics to be more precise. After Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing around with 4 Nao robots who just waited to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But, whether or not we are able to succeed, we will make an effort to learn as much as possible through making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for Nao is full of challenges. Here we have some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Keyboard_Control_Python|How to use keyboard to walk Nao around?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
As you are reading this, you are already headed in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
We started with Naos running version 1.12 of everything, but started learning from the default reference of SDK 1.14. We noticed this but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We had many problems with modules not existing on the Nao, and overall we have spent a lot of time on this. So update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran has published a list of functions that have been changed [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;N.B&amp;lt;/b&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
When updating the Naos, don't forget to update the SDK, cross-toolchain, qibuild etc. as well. If you do forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built using the old version of the Atom cross-toolchain. When Naoqi tried loading the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get the state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Start_learning_robotics&amp;diff=10509</id>
		<title>Start learning robotics</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Start_learning_robotics&amp;diff=10509"/>
		<updated>2013-02-06T09:35:23Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Useful sources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Motivation===&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Don't be daunted by the feeling that you know nothing or only a little - just start, day by day.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Find other people to help you (write to research groups, etc.) - someone will be willing to help you if you are persistent enough.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Not motivated enough yet? Go to [http://www.youtube.com/watch?v=nNbj2G3GmAo youtube] and watch some of the amazing things robots can do :)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
===Useful sources===&lt;br /&gt;
[https://www.udacity.com/course/cs373 Udacity]&lt;br /&gt;
[https://www.coursera.org/category/cs-ai Coursera]&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Start_learning_robotics&amp;diff=10508</id>
		<title>Start learning robotics</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Start_learning_robotics&amp;diff=10508"/>
		<updated>2013-02-06T09:34:10Z</updated>

		<summary type="html">&lt;p&gt;Siims: Created page with &amp;quot;===Motivation=== &amp;lt;ul&amp;gt; &amp;lt;li&amp;gt;Don't be daunted by the feeling you don't know anything or know a little - just start day by day.&amp;lt;/li&amp;gt; &amp;lt;li&amp;gt;Find other people to help you (letters to ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Motivation===&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Don't be daunted by the feeling that you know nothing or only a little - just start, day by day.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Find other people to help you (write to research groups, etc.) - someone will be willing to help you if you are persistent enough.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Not motivated enough yet? Go to [http://www.youtube.com/watch?v=nNbj2G3GmAo youtube] and watch some of the amazing things robots can do :)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
===Useful sources===&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10507</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10507"/>
		<updated>2013-02-06T09:18:22Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How To */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for the Nao is full of challenges. Here are some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
[[Start_learning_robotics|How to start learning robotics?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings: the &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are already heading in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
Our Naos were running version 1.12 of everything, while we started learning from the default reference for SDK 1.14. We noticed the mismatch but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We ran into many problems with modules that didn't exist on the Nao, and overall lost a lot of time to it. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good upgrade tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran maintains a list of functions that have changed between versions [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===N.B.===&lt;br /&gt;
When updating the Naos, don't forget to also update the SDK, cross-toolchain, qibuild, etc. If you forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built with an old version of the Atom cross-toolchain. When Naoqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10506</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10506"/>
		<updated>2013-02-06T09:16:36Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* What problems have we faced? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by making the Naos do something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
== How To ==&lt;br /&gt;
Developing software for the Nao is full of challenges. Here are some tutorials/instructions that might help you get started with the basics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Setup_Nao|How to start?]]&lt;br /&gt;
&lt;br /&gt;
[[Nao_Module|How to build a module?]]&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings: the &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are already heading in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
&lt;br /&gt;
===Version differences===&lt;br /&gt;
Our Naos were running version 1.12 of everything, while we started learning from the default reference for SDK 1.14. We noticed the mismatch but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We ran into many problems with modules that didn't exist on the Nao, and overall lost a lot of time to it. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration. A good upgrade tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is also useful to know that Aldebaran maintains a list of functions that have changed between versions [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===N.B.===&lt;br /&gt;
When updating the Naos, don't forget to also update the SDK, cross-toolchain, qibuild, etc. If you forget, you might encounter errors that are really hard to interpret. One such error occurred when a module was built with an old version of the Atom cross-toolchain. When Naoqi tried to load the module, it gave:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;libboost_signals-mt-1_45.so.1.45.0: cannot open shared object file: No such file or directory&amp;lt;/code&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10466</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10466"/>
		<updated>2013-02-04T21:45:50Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* What problems have we faced? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by putting the Naos to work on something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
==How to set up?==&lt;br /&gt;
We use the C++ and Python APIs for software development (and have also played around with [https://developer.aldebaran-robotics.com/resources/tutorial/nao-first-steps/ Choregraphe]). Briefly, here is how we started:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Get the latest version from [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]; if your robot needs a specific hardware configuration, get it from your CD.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For SDK installations follow [http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide] for C++ and [http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide] for Python.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;And continue with tutorials and examples...&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Beginnings: the &amp;quot;Big Unknown&amp;quot;===&lt;br /&gt;
If you are reading this, you are already heading in the right direction :) We also found it useful to read other users' tutorials, especially the wiki of the [http://robotica.unileon.es/mediawiki/index.php/Nao_tutorial_1:_First_steps Robotics Group of the University of León].&lt;br /&gt;
===Versions===&lt;br /&gt;
Our Naos were running version 1.12 of everything, while we started learning from the default reference for SDK 1.14. We noticed the mismatch but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We ran into many problems with modules that didn't exist on the Nao, and overall lost a lot of time to it. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration.&lt;br /&gt;
===Versions2===&lt;br /&gt;
It is useful to know that Aldebaran maintains a list of functions that have changed between versions [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&lt;br /&gt;
===Updating Nao===&lt;br /&gt;
A good tutorial is [http://www.aldebaran-robotics.com/documentation/nao/upgrade.html here].&lt;br /&gt;
===&amp;quot;Hidden&amp;quot; files in users.aldebaran-robotics.com===&lt;br /&gt;
To get state-of-the-art software for the Nao, visit [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]. Log in, then go to Software &amp;gt; Download &amp;gt; All Downloads and navigate from there (not all software is on the front page!).&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10465</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10465"/>
		<updated>2013-02-04T21:34:58Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How to set up? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by putting the Naos to work on something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
==How to set up?==&lt;br /&gt;
We use the C++ and Python APIs for software development (and have also played around with [https://developer.aldebaran-robotics.com/resources/tutorial/nao-first-steps/ Choregraphe]). Briefly, here is how we started:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Get the latest version from [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]; if your robot needs a specific hardware configuration, get it from your CD.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For SDK installations follow [http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide] for C++ and [http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide] for Python.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;And continue with tutorials and examples...&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Versions===&lt;br /&gt;
Our Naos were running version 1.12 of everything, while we started learning from the default reference for SDK 1.14. We noticed the mismatch but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We ran into many problems with modules that didn't exist on the Nao, and overall lost a lot of time to it. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration.&lt;br /&gt;
===Versions2===&lt;br /&gt;
It is useful to know that Aldebaran maintains a list of functions that have changed between versions [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10463</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10463"/>
		<updated>2013-02-04T21:32:08Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* What problems have we faced? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by putting the Naos to work on something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
==How to set up?==&lt;br /&gt;
We use the C++ and Python APIs for software development. Briefly, here is how we started:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Get the latest version from [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]; if your robot needs a specific hardware configuration, get it from your CD.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For SDK installations follow [http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide] for C++ and [http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide] for Python.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;And continue with tutorials and examples...&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;br /&gt;
===Versions===&lt;br /&gt;
Our Naos were running version 1.12 of everything, while we started learning from the default reference for SDK 1.14. We noticed the mismatch but didn't think it would make much difference... Wrong! Aldebaran is advancing its software very actively. We ran into many problems with modules that didn't exist on the Nao, and overall lost a lot of time to it. So either update and work with the newest and coolest modules, or go to the [https://developer.aldebaran-robotics.com/doc/1-12/ref/index.html API 1.12 reference] to avoid some frustration.&lt;br /&gt;
===Versions2===&lt;br /&gt;
It is useful to know that Aldebaran maintains a list of functions that have changed between versions [http://www.aldebaran-robotics.com/documentation/dev/cpp/tutos/porting_to_1.12.html].&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10460</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10460"/>
		<updated>2013-02-04T21:21:27Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How to set up? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by putting the Naos to work on something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
==How to set up?==&lt;br /&gt;
We use the C++ and Python APIs for software development. Briefly, here is how we started:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Get the latest version from [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]; if your robot needs a specific hardware configuration, get it from your CD.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For SDK installations follow [http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide] for C++ and [http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide] for Python.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;And continue with tutorials and examples...&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10459</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10459"/>
		<updated>2013-02-04T21:18:14Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How to set up? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by putting the Naos to work on something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
==How to set up?==&lt;br /&gt;
&lt;br /&gt;
We wanted to use the C++ and Python APIs for software development.&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Get the latest version from [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]; if your robot needs a specific hardware configuration, get it from your CD.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For SDK installations follow [http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide] for C++ and [http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide] for Python.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=Nao&amp;diff=10458</id>
		<title>Nao</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=Nao&amp;diff=10458"/>
		<updated>2013-02-04T21:16:52Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* How to set up? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
&lt;br /&gt;
We are a team of people interested in robotics - the software side of robotics, to be more precise. After the Robotex 2012 competition we wanted to do something cool and challenging, so we ended up playing with 4 Nao robots that were just waiting to be found :)&lt;br /&gt;
&lt;br /&gt;
Our goal is to prepare the Naos for the [http://www.robocup.org/robocup-soccer/standard-platform/ Robocup Standard Platform] competition. But whether or not we succeed, we will make an effort to learn as much as possible by putting the Naos to work on something useful and pleasant.&lt;br /&gt;
&lt;br /&gt;
==How to set up?==&lt;br /&gt;
&lt;br /&gt;
We wanted to use the C++ and Python APIs for software development.&lt;br /&gt;
1. Get the latest version from [http://users.aldebaran-robotics.com/ http://users.aldebaran-robotics.com/]; if your robot needs a specific hardware configuration, get it from your CD.&lt;br /&gt;
2. For SDK installations follow [http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide http://www.aldebaran-robotics.com/documentation/dev/cpp/install_guide.html#cpp-install-guide] for C++ and [http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide http://www.aldebaran-robotics.com/documentation/dev/python/install_guide.html#python-install-guide] for Python.&lt;br /&gt;
&lt;br /&gt;
==What problems have we faced?==&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
	<entry>
		<id>https://ims.ut.ee/index.php?title=User:Siims&amp;diff=10457</id>
		<title>User:Siims</title>
		<link rel="alternate" type="text/html" href="https://ims.ut.ee/index.php?title=User:Siims&amp;diff=10457"/>
		<updated>2013-02-04T20:55:01Z</updated>

		<summary type="html">&lt;p&gt;Siims: /* Siim Schults */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:profilepic.JPG|right|wrap|100px]]&lt;br /&gt;
== Siim Schults ==&lt;br /&gt;
&lt;br /&gt;
''I believe that an honest man can be successful. I believe that working hard gets me nearer where I want to go. I believe that people should work on things they are interested in. I believe that things can be done better. And I like robotics :D ''&lt;br /&gt;
&lt;br /&gt;
== Contacts ==&lt;br /&gt;
 &lt;br /&gt;
 '''Email:''' siimsch &amp;lt;nospam&amp;gt; ut.ee&lt;br /&gt;
 ''Skype:'' callto://siimschults&lt;br /&gt;
&lt;br /&gt;
== [http://www.linkedin.com/pub/siim-schults/30/160/881  CV] ==&lt;/div&gt;</summary>
		<author><name>Siims</name></author>
	</entry>
</feed>