Originally, I wanted to train my own model for emotion recognition, but I realized it would take too long: just 10 MB of data took me 3 hours to train, and I have 400 MB of data in total to finish the model.
So I turned to clmtrackr to do the emotion recognition instead.
To insert an emoji whenever the user sends a message, I had two tasks. First, I had to capture the HTML element that Facebook stores our message in. Second, I had to make sure that when the user hits the Enter key, my insert event fires before the message is sent.
The second task is quite complicated, because large websites like Facebook usually separate the front-end and the back-end. If I only insert the emoji into the element's .innerText, it only changes the user's view, meaning the user "can" see an emoji at the end of the message. However, as soon as the user hits Enter, the emoji disappears, because we never actually send the emoji to Facebook's server: the page builds the message from keyboard typing events, which we have to simulate ourselves. So I have to:
1. Pause the Facebook event listener that listens for the Enter key
2. Remove this event listener so it won't block the Enter key event simulated by our code
3. Insert the emoji
4. Simulate the user's Enter key press to send out the message
5. Add a new stopEnterKey event listener, so the next time the user hits Enter the insert action is triggered again
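The steps above can be sketched as an intercept-and-resend loop. This is a hedged sketch using a plain `EventTarget` so it runs outside the browser; in the real extension the target would be Facebook's message element and a real `KeyboardEvent`, and `insertEmoji` is a hypothetical callback standing in for the `.innerText` edit.

```javascript
// Intercept the Enter key, insert the emoji, then re-send a marked
// synthetic event that the page's own handler is allowed to process.
function makeInterceptor(target, insertEmoji) {
  function onEnter(event) {
    if (event.synthetic) return;        // let our re-dispatched event through
    event.stopImmediatePropagation();   // steps 1-2: block the page's handler
    insertEmoji();                      // step 3: append the emoji
    const resend = new Event('keydown');
    resend.synthetic = true;            // mark it so we don't intercept again
    target.dispatchEvent(resend);       // step 4: simulate the Enter press
  }
  // Registered first, so it runs before the page's own listener.
  target.addEventListener('keydown', onEnter);
  return onEnter;                       // step 5: can be removed and re-added
}
```

In a real page, where the extension cannot register before Facebook's scripts, a capture-phase listener (`addEventListener('keydown', onEnter, true)` on an ancestor element) serves the same ordering purpose.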
For now, this Chrome extension only partially works, because the Chrome browser does not give extensions permission to access the webcam, so my next step may be to rewrite it as an Electron app that listens to the laptop's keyboard events.
This is a rough application of the smart rockets example. I think growing hair also follows a certain kind of evaluation and selection to fit the environment (in this example, following the arrow). For example, if you wear something on your head for a long time, the covered area will probably no longer grow hair, and the hair in nearby areas will grow in the opposite direction, away from the covered area.
P.S. Most of the code is based on the original smart rockets example from the NOC class.
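The evaluate-and-select loop from the smart rockets example can be sketched in a few lines. Here fitness is simply how well a hair's growth direction matches the target arrow; all names are illustrative, not taken from the NOC code.

```javascript
// Minimal genetic-algorithm selection step in the smart-rockets style.
// Each "hair" is just an angle (its growth direction, in radians);
// fitness rewards angles close to the target arrow's direction.
function fitness(angle, targetAngle) {
  // cosine of the difference: 1 when aligned, -1 when pointing opposite
  return Math.cos(angle - targetAngle);
}

function select(population, targetAngle) {
  // keep the fitter half as parents for the next generation
  const ranked = [...population].sort(
    (a, b) => fitness(b, targetAngle) - fitness(a, targetAngle)
  );
  return ranked.slice(0, Math.ceil(ranked.length / 2));
}
```

In the full example, the selected parents would then be crossed over and mutated each generation, as in the NOC smart rockets sketch.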
As a follower of Shiffman Rainbowism, I wanted to build a clock that emits a rainbow beam as a pointer to show the time. Since I already had a 180-degree servo, I decided to build a clock that shows 12 hours across a half circle. Here is my sketch.
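Since 12 hours are spread across the servo's 180 degrees, each hour is worth 15 degrees. A sketch of that mapping (the function name is mine, not from the project code):

```javascript
// 12 hours across a 180-degree servo: 15 degrees per hour.
function hourToAngle(hour, minute = 0) {
  const h = (hour % 12) + minute / 60; // 12 o'clock wraps around to 0
  return h * 15;                       // result stays in the servo's 0-180 range
}
```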
Then I cut these parts with the laser cutter and used screws and standoffs to assemble them. I also bought a wooden frame to make the whole thing look better.
The next step was to create and adjust the rainbow beam. I bought a prism from Amazon (no prisms at Canal Plastics) and tested how to make a rainbow beam. Sadly, I found there must be some distance for the light beam to spread into a rainbow. The direction and power of the light also matter a lot.
In the end, I failed to mount the prism onto the clock, because it was too heavy for the cardboard to support.
1. When experimenting with materials or tools, the time to reserve is 5x the estimate, not 3x.
2. After a whole semester, I have a much better sense of which materials I use a lot and should keep in stock.
3. Stop imagining I can plan and sketch perfectly. I should just start making something and iterate. It also helps to split one project into many prototype parts to experiment with.
Last but not least, thank you Ben for this semester 😀
For this week's assignment, I built a foldable chair combining aluminum and fabric.
The first step was to prepare the frame of the chair. I cut the aluminum cleanly with the circular saw and tried different drill bits to see which one fit best with my screws.
Unfortunately, I broke the first drill bit of my life. Congrats, man, new life record achieved. I found that working with metal is hard: measurements have to be very precise, and the drilling motion must be controlled with a very stable setup. An inaccurate hole in metal can ruin the whole piece, because it's impossible to fix the mistake the way we can with other materials such as wood. Unstable control breaks a drill bit very easily on such a hard material.
After all the material was prepared, I assembled the bars into the frame of the chair and finally sewed the fabric onto the frame.
The idea of this project comes from a childhood memory: shadow puppets. Before we grow up and become more sophisticated about the world, we play with, and connect, meanings and objects in a more creative way. A stick can be a warrior's sword. A shadow can be a monster hiding in the room. But most of us lose this power of imagination. In this project, we want to reawaken people's early habit of imagination and bring shadows back to life in our living environment.
Our reference is "The Treachery of Sanctuary" by Chris Milk. The installation takes viewers through three stages of flight using Kinect controllers and infrared sensors. Viewers come close to the screen and start waving their arms; the "shadow" of their arms is transformed into two giant wings that move as viewers wave their arms and walk closer to or further from the screen.
In the original design, we wanted users to first create a real shadow with a bird hand gesture, the easiest shadow-animal gesture. Then the shadow flies into the screen and becomes a graphic bird, representing the imagination coming to life. Finally, the bird moves into the projection and lives in the real environment.
We chose the Kinect sensor over Leap Motion or a combination of a flex sensor and an accelerometer as our input because it has a large sensing zone and offers an intuitive experience. For the output, we use an overhead viewpoint to mimic the shape of a shadow. The bird image was drawn in Illustrator and animated with P5.js.
The first problem we faced was arranging the positions of the flashlight, the projection, and the Kinect; they influence each other and the users' movement. The second problem was controlling the Kinect: we spent a lot of time filtering out the noise that appears when the Kinect detects more than one person. Finally, we originally wanted to use three.js to show a 3D bird in the browser, but failed.
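One simple way to suppress that multi-person noise, sketched here with made-up data shapes (the real code reads Kinect skeleton data), is to keep only the tracked body closest to the screen and ignore bystanders:

```javascript
// Keep only the user nearest to the Kinect, dropping everyone else.
// Each "user" is assumed to look like {id, depth}, with depth in meters.
function nearestUser(users) {
  if (users.length === 0) return null;
  return users.reduce((closest, u) => (u.depth < closest.depth ? u : closest));
}
```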