Bulletproof Junior media recognition

Several months ago I was involved in a viral campaign with various folks from the ad industry, and it received plenty of media coverage. The campaign was meant to draw attention to the government's poor record on gun control. After the Parkland high school shooting, the team felt motivated to question where the country is headed if we stay passive about the current gun crisis. The message confronted parents with a possible new reality: our children wearing bulletproof vests in school. I helped develop a website that emulates a company manufacturing children's vests for kindergarten through high school. The intention was to drive as much traffic as possible from social media to this fake store while sending a message. The kicker is that when users click Buy Now or any other button, they are taken to an interface where they can tweet directly at their local senator and demand action.

The way I constructed this website was fairly straightforward. I had a short time frame and no budget. Timing was crucial because we wanted to launch while the topic was still relevant in the media. I could only work on this after hours, and it needed to be done in a few nights. Most of my time went into finding the right API for locating your local senators. The Google Civic Information API had good documentation, so I went with that. I made a form where the user enters their zip code. They then receive a pre-composed tweet that includes the Twitter handles of their local senators, the ready-made message, and the relevant hashtags.
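Under the hood, the flow is a lookup followed by a Twitter web intent. Here is a minimal sketch of it, assuming the Civic Information API's representatives endpoint with its senator filters; the API key, message copy, and hashtag below are placeholders, not the campaign's actual values:

```js
// Look up U.S. senators for a zip code via the Google Civic Information API,
// then open a pre-filled tweet addressed to their Twitter handles.
const API_KEY = 'YOUR_GOOGLE_API_KEY'; // placeholder

async function tweetAtSenators(zip) {
  const url = 'https://www.googleapis.com/civicinfo/v2/representatives' +
    `?address=${encodeURIComponent(zip)}` +
    '&roles=legislatorUpperBody&levels=country' + // federal senators only
    `&key=${API_KEY}`;
  const data = await (await fetch(url)).json();

  // Each official may list social accounts under "channels".
  const handles = (data.officials || [])
    .flatMap((official) => official.channels || [])
    .filter((channel) => channel.type === 'Twitter')
    .map((channel) => '@' + channel.id);

  // Placeholder copy, not the campaign's actual message.
  const text = `${handles.join(' ')} Our kids should not need bulletproof ` +
    'vests at school. Demand action on gun control. #DemandAction';
  window.open('https://twitter.com/intent/tweet?text=' + encodeURIComponent(text));
}
```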

Analytics was important to this project. I pushed for it even though it would cost me more late-night hours. It was essential to understand who was engaging with the website: from what part of the country, at what time, what they clicked on, how long they spent on each page, and so on. It was also an opportunity to learn Google Analytics and Google Tag Manager in more depth. Learning Tag Manager was not a simple process. Previously, I would just insert the JavaScript snippet in the footer of each page and call it a night, but this time I wanted to be more ambitious. Even though the process is well documented, there are still a lot of details to iron out, and navigating the dashboard was cumbersome. Most of the time I would wire tags to my buttons but never get a response back in the dashboard. Eventually they worked, and it was amazing watching them in real time. I spent lots of time looking at the instant feedback on clicks and dwell times, and after a day's worth of analytics we made modifications to the site that made a difference in the long run.
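For the button tracking, the usual Tag Manager pattern is to push a custom event into the data layer and let a GTM trigger fire the Analytics tag. A sketch of the page-side half, with illustrative event and class names rather than the ones I actually used:

```js
// Push a custom event into Google Tag Manager's data layer whenever a
// "Buy Now" button is clicked; a GTM trigger listening for the
// 'cta_click' event then fires the corresponding Analytics tag.
window.dataLayer = window.dataLayer || [];

document.querySelectorAll('.buy-now').forEach((button) => {
  button.addEventListener('click', () => {
    window.dataLayer.push({
      event: 'cta_click',          // custom event name the GTM trigger matches
      ctaLabel: button.textContent // exposed to tags as a Data Layer Variable
    });
  });
});
```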

Check out the mentions of my name in Ad Age, Adweek, and Fast Company below:

https://adage.com/creativity/work/bulletproof-junior-vests-bulletproof-junior-vests/54177

https://www.adweek.com/agencies/agency-execs-create-fake-website-selling-bulletproof-kids-clothes-to-highlight-americas-gun-problem/

https://www.fastcompany.com/40547726/these-bulletproof-vests-for-kids-are-perfect-for-the-next-school-shooting

3D Printing Demystified

Before learning anything about 3D printing, I had a rough idea of how the process would work. Being familiar with 3D concepts, I knew there had to be a 3D file format that the printer could interpret. I was fortunate to be given access to an Ultimaker 2 printer.

Our client was coming over to the agency for breakfast, so our account team decided to impress them with a reusable stencil for putting logos on top of lattes. I quickly created a prototype by turning an Illustrator file into a 3D object in Blender: I extruded the paths from the EPS file and scaled the thickness a bit. I then exported the model as a .obj file, converted the .obj into a format the Ultimaker accepts, and we were in business. Cura, the Ultimaker slicing app, visualizes the object at scale inside the printer, which gives you a precise idea of how big the printed piece will be.

I didn't pull this off without mistakes. Reading documentation and some trial-and-error sessions were how I learned to demystify this really cool tool.



Museum Exhibit Prototype

Museums are another industry always trying new ways to engage their visitors. I've visited many museums around the world, and exhibit curators sometimes make the effort to try new interactive platforms. Some fail, but some actually succeed. I visited a contemporary art museum in Stockholm, Sweden, that had an Andy Warhol exhibit; they used tablets as mini-kiosks to demonstrate Warhol's influence on pop music. A better example of new technology in an art exhibit was Björk Digital in Los Angeles, where developers collaborated with the artist/musician on interactive pieces for her songs. It was more visual than anything else: each song was represented differently and was unique. The UX was maybe a bit confusing, but overall I navigated without a problem.

This project reminded me that it could serve as an example of a public exhibit piece. Initially, it was just an effort to build a 3D piece for the web. After a while, I added hand gestures using Leap Motion so you can move the 3D object with your hands. Finally, after some contemplation, I added Socket.io simply to give users the ability to view the scene through a VR headset. Add all of this up and you have the tools to make a digital exhibit. The main idea was a goggle/viewer/headset for visitors to wear and examine a subject up close; in this case, I found a free 3D model of a bear. The Natural History Museum and the La Brea Tar Pits first came to mind as places where this could be applied: a visitor simply puts the headset on and uses their hands to examine a prehistoric mammoth that once roamed today's Los Angeles landscape.
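The Socket.io piece is just a relay: the /fixed page that sees the Leap Motion broadcasts the bear's transform, and every /mirror (headset) page applies it. A minimal sketch of that server, with an assumed 'transform' event name rather than the project's exact code:

```js
// Relay server: whatever transform the /fixed (Leap-tracked) client emits
// gets re-broadcast to every /mirror (headset) client.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
app.use(express.static('public')); // serves the /fixed and /mirror pages
const server = http.createServer(app);
const io = new Server(server);

io.on('connection', (socket) => {
  // Forward the bear's rotation/position to everyone except the sender.
  socket.on('transform', (data) => {
    socket.broadcast.emit('transform', data);
  });
});

server.listen(process.env.PORT || 3000);
```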


Instructions:

1. Go to https://bearvr.herokuapp.com/#/fixed on a desktop browser.

2. Go to https://bearvr.herokuapp.com/#/mirror on a mobile browser. Insert your phone into a VR headset and set the page to VR mode.

3. Make sure you have your Leap Motion connected. You can purchase one on Amazon.

4. Move your hands over the Leap Motion and the bear will move with them.


Fun with Socket.io – Labyrinth 3D

Google's Chrome Experiments never fail to impress. Every time the Chrome team releases a web experiment, it reminds me that the web will never disappear and how much potential it has. One of my favorite Chrome Experiments was the Arcade Fire site; at the time I had never heard of the band, but the site rekindled my love for my career choice. The site that first sparked my love of the digital web was the Donnie Darko website developed by Hi-ReS! in the UK.

In 2013, Chrome released an experiment that used a desktop browser for the display and a mobile browser as the controller. It was one of the first web experiments to use both browsers synchronously for a game; they may have used something similar to Socket.io to let the two browsers talk to each other. Over the years I had the itch to make something similar, so I decided to do a project that pairs a mobile and a desktop browser in a VR environment.

I knew I had to use Socket.io so my Node server could relay requests and responses between the two browsers. I used A-Frame to set up the 3D environment, with React as my front-end library; in React, I used state to update the labyrinth's x, y, and z coordinates. Mobile browsers expose the device's tilt sensors through the deviceorientation event, so I used those readings as parameters in a function that sends them to the Node server, which then dispatches them to the listener on the labyrinth side. It works great, and I never get bored playing with it.
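A sketch of the two ends of that pipeline, assuming the socket.io client script is loaded on both pages and using an illustrative 'tilt' event name and #labyrinth entity id (the Node relay in between is the same broadcast pattern as in the bear demo above):

```js
// Both pages load the socket.io client and open their own connection.
const socket = io();

// Controller page (mobile): stream the phone's tilt to the server.
if (/#\/controller/.test(location.hash)) {
  window.addEventListener('deviceorientation', (e) => {
    // beta = front/back tilt, gamma = left/right tilt, in degrees
    socket.emit('tilt', { beta: e.beta, gamma: e.gamma });
  });
}

// Labyrinth page (desktop): apply the tilt to the A-Frame entity.
const maze = document.querySelector('#labyrinth');
if (maze) {
  socket.on('tilt', ({ beta, gamma }) => {
    // Tip the maze with the phone's tilt so the ball rolls.
    maze.setAttribute('rotation', { x: beta, y: 0, z: -gamma });
  });
}
```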


Instructions:

1. Go to http://labyrinth3d.herokuapp.com/#/floor on a desktop browser.

2. Go to http://labyrinth3d.herokuapp.com/#/controller on a mobile browser.

3. Move your mobile device side to side to control the Labyrinth.


The Coors Light experiment

I don't like Coors Light, by the way, especially after reading an article claiming that most commercial, mass-produced beers are mainly corn-based. Anyway, this is not an article about beer. The only reason there's a Coors Light bottle in this experiment is that I couldn't find another free, open-source 3D model, so I used this beer bottle for my prototype.

Mozilla's A-Frame WebVR community is growing: the contributors keep increasing, and the VR library keeps getting more robust. I became aware that someone had made an A-Frame plugin that integrates Leap Motion, a hardware sensor that tracks hand and finger motions without any hand contact or touching. Don McCurdy, a regular contributor to the A-Frame community, wrote the plugin, and it is available on GitHub.
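Wiring the plugin up is mostly declarative. A minimal scene using the aframe-leap-hands component might look like the following; the script URLs, versions, and the bottle model path are placeholders:

```html
<!-- Minimal A-Frame scene with Leap Motion hand tracking.
     Script URLs/versions and the model path are placeholders. -->
<script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
<script src="https://unpkg.com/aframe-leap-hands/dist/aframe-leap-hands.min.js"></script>

<a-scene>
  <!-- The beer bottle model -->
  <a-entity obj-model="obj: url(models/bottle.obj)" position="0 1 -0.5"></a-entity>

  <!-- Leap-tracked hands, parented to the camera rig -->
  <a-entity camera look-controls position="0 1.6 0">
    <a-entity leap-hand="hand: left"></a-entity>
    <a-entity leap-hand="hand: right"></a-entity>
  </a-entity>
</a-scene>
```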

After installing the package in my VR project, I was impressed to a certain degree but not blown away. I've played with the Leap in a non-web environment, and there it feels smooth and responsive; on the WebVR front, the Leap feels sufficient but lacks control when grabbing things. Why is the Leap important to the developer community? With it, you no longer need hand controllers, and VR apps become more immersive; having it available on the web makes these experiences even more accessible. Recently, Leap Motion announced $50 million in funding from JP Morgan Asset Management. Not bad for a project that started as a Kickstarter.


Instructions:

1. Go to http://rudes.de/coors/index.html

2. Make sure you have your Leap Motion connected. You can purchase one on Amazon.

3. Move your hands over the Leap Motion and it will track your movements in the browser.


Gallery LA VR

There's a crazy number of independent art galleries in Downtown Los Angeles, and most of them can't even be found on Google. Driving around the outskirts of Downtown, you'll find galleries big and small scattered in the most unlikely places. Most are run by local artists, others by artists from other countries. Whatever their origin, it's always great to see the community grow, and I wanted to contribute in some way. Also worth mentioning: I volunteer at Superchief Gallery in DTLA every now and then, and I always try to consult on whatever digital challenges they run into.

I decided to start a public data set/API that anyone can access, and anyone can add new gallery info to help it grow. The intention is to make it a public resource that anyone can use in their own way: for apps, directories, data reference, and so on. Most importantly, it was an excellent opportunity for me to learn a backend "stack," as most devs call it. I used Node.js as my server runtime, Express.js for routing, Handlebars for my views, and MongoDB for the database.
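A minimal sketch of the read-and-contribute endpoints in that stack, using illustrative routes and schema fields rather than the data set's actual ones:

```js
// Express + MongoDB sketch: list galleries and accept new entries.
const express = require('express');
const mongoose = require('mongoose');

// Illustrative schema; the real data set's fields may differ.
const Gallery = mongoose.model('Gallery', new mongoose.Schema({
  name: String,
  address: String,
  website: String,
  vrPanoramaUrl: String, // optional 360 photo for the VR view
}));

const app = express();
app.use(express.json());

// Public read access to the whole data set
app.get('/api/galleries', async (req, res) => {
  res.json(await Gallery.find());
});

// Anyone can contribute a new gallery
app.post('/api/galleries', async (req, res) => {
  const gallery = await Gallery.create(req.body);
  res.status(201).json(gallery);
});

mongoose.connect('mongodb://localhost/galleryla')
  .then(() => app.listen(process.env.PORT || 3000));
```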

I made it a VR experience at the last minute. After reading a Google developer blog post announcing the release of their VR framework, I was motivated to make some VR content. Google's VR documentation was helpful with specifications and camera recommendations, and I purchased a Ricoh Theta camera for around 300 bucks. Then I studied the framework and integrated it with my database: when adding a new gallery, you have the option to add VR.
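Assuming the framework in question was Google's VR View for the Web (which matches the 360-photo workflow), embedding a gallery's panorama looks roughly like this; the image path is a placeholder:

```html
<!-- Embed a gallery's 360 photo with Google VR View for the Web.
     The image path is a placeholder for an equirectangular Theta shot. -->
<div id="vrview"></div>
<script src="https://storage.googleapis.com/vrview/2.0/build/vrview.min.js"></script>
<script>
  window.addEventListener('load', function () {
    new VRView.Player('#vrview', {
      image: '/panos/gallery_4096.jpg', // 360 photo from the Ricoh Theta
      is_stereo: false,                 // Theta shots are mono 360
      width: '100%',
      height: 480,
    });
  });
</script>
```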


Instructions:

1. To view the VR database, go to https://galleryla.herokuapp.com

2. Log in using the following credentials:

Username: rrudy90023

Password: rudy

*For the optimal VR experience, view on a mobile device. Thanks to all the gallery owners for letting me shoot their spaces.


[Image: parrasch_4096.jpg]

Viva Rodney!!

I made this stencil earlier this year, and I eventually want to make it into a t-shirt or something. It's the infamous Rodney on the ROQ!


[Image: rodney.png]

Dayton rims

I illustrated a Dayton wheel. There's something unique about this car rim: it's very prevalent in LA car culture, and it really stands out. I may make this into a t-shirt or something. Trademark pending. 😉


[Image: dayton2.png]