Shrikar Archak

It does not matter how slow you go so long as you do not stop. ~Confucius

Voice of Internet: Twilio, ParseApp and Webflow Based Audition Platform

I have been playing with many cloud-based platforms, and some really stand out from the rest. With these new platforms, the time taken to build an app has dropped drastically, and it's a lot easier to get started as well.

A few platforms which I used to build this app are Twilio, ParseApp, and Webflow.

Voice Of Internet

Voice of Internet is a platform where people can call a number and show off their talent within 2 minutes. It can be singing, playing an instrument, or whatever you can think of. Once the content is PG rated, you can vote for it by sending an SMS to the phone number mentioned.
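To give a rough idea of the mechanics, here is a minimal sketch, assuming a Flask app wired up as a Twilio voice webhook. This is not the actual Voice of Internet code; the route names are illustrative.

from flask import Flask

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    # TwiML response: prompt the caller, then record for at most 120 seconds
    # (the 2-minute limit). The /recording-done callback URL is an assumption.
    twiml = (
        "<?xml version='1.0' encoding='UTF-8'?>"
        "<Response>"
        "<Say>Show off your talent after the beep.</Say>"
        "<Record maxLength='120' action='/recording-done'/>"
        "</Response>"
    )
    return twiml, 200, {"Content-Type": "text/xml"}

if __name__ == "__main__":
    app.run(port=5000)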

Give it a try here

Voice Of Internet

SmartCopy: Intelligent Layer on Top of Existing Cloud Storage

Simple features matter

With so many options available in the cloud storage space, I am sure everyone uses one or more of these services (Dropbox/Box/Google Drive etc.). One key missing feature is a simple way to exclude files from syncing.

Storage space is not free

Storage space is not free, so it really matters what we sync to the cloud. Nitpicking individual files to save space is not an easy option, so we tend to copy files we don't need.

It was not just me facing this problem; there were similar feature requests in the Dropbox, Box, and Google Drive forums. I wonder why such simple features were ignored. Anyway, enough nagging, let's get to the good part.

Deciding what language or tools to use.

Languages I know: C, C++, Java, Scala, Python. After working in C/C++ for a long time, I knew managing binaries and shared libraries would be painful, so I eliminated them.

Requirements:

  • Should support monitoring directory/file changes. All three languages (Java, Scala and Python) qualify for this.
  • Should be installed by default, or installation should be bare minimum. Python is installed by default on most operating systems and hence a good candidate.
  • Should run on a Unix-based system with support for forking. (Thanks to Nei for the comment.)

Python it is!!

Design

I followed an approach similar to .gitignore and decided to keep a list of all the patterns that should be excluded from syncing.

Example

  • .*.jar : Ignore all the files containing .jar
  • .class$ : Ignore all the files ending with .class
  • ^Bingo : Ignore all the files starting with Bingo

For more information on using regular expressions, please check the Python regex documentation.
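As a quick illustration, here is a minimal sketch (not the actual SmartCopy code) of how such an ignore list can be applied: compile each pattern once at startup, then check every changed file against the list before syncing.

import re

# Patterns from the example above, compiled once at startup.
ignore_patterns = [".*.jar", ".class$", "^Bingo"]
compiled = [re.compile(p) for p in ignore_patterns]

def should_sync(filename):
    """Return True only if the file matches none of the ignore patterns."""
    return not any(p.search(filename) for p in compiled)

print(should_sync("model.class"))    # False: matches .class$
print(should_sync("BingoCard.txt"))  # False: matches ^Bingo
print(should_sync("notes.txt"))      # True: safe to sync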

Components

  • smartcopyd : SmartCopy Daemon. The daemon monitors a directory for changes, filters files according to the ignore patterns, and syncs the rest to the cloud storage.

  • smartcopy : SmartCopy Client. The client allows you to change the config file and modify the ignore pattern rules.

Possible improvements/features

If you need a feature, do tweet. The feature with the most tweets or retweets wins and will be implemented next.

Github repo : SmartCopy

Docker, Nginx and Sentiment Engine on Steroids

Recipe for 74 Million Requests Per Day

In this blog post I will explain a battle-tested setup that can scale HTTP serving up to 860 req/s, a cumulative of roughly 74 million requests per day (860 req/s × 86,400 seconds ≈ 74 million).

Let's start with our requirements. We needed a low latency sentiment classification engine serving literally millions of social mentions per day. Of late, the load on the sentiment engine cluster has been increasing considerably after Viralheat's pivot to serve enterprise customers. The existing infrastructure was not able to handle the new load, forcing us into a Friday night out to fix it.

Setup

  • Nginx running on bare metal
  • Sentiment engine powered by a Tornado server running in Docker instances (Docker version 0.7.5)

In a perfect world the default kernel settings would work for any kind of workload, but in reality they won't. The default kernel settings are not suitable for high load; they are meant for general purpose networking. In order to serve heavy volumes of short-lived connections, we need to modify and tune certain OS settings along with the TCP settings.

First increase the open file limit

Modify /etc/security/limits.conf to allow a high number of open file descriptors. Since every open file takes some OS resources, make sure you have sufficient memory; don't blindly increase the open file limits.

/etc/security/limits.conf
*               soft     nofile          100000
*               hard     nofile          100000

Sysctl Changes

Modify /etc/sysctl.conf to have these parameters.

/etc/sysctl.conf
fs.file-max = 100000
net.ipv4.ip_local_port_range = 2000 65000
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = cubic

net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
  • net.ipv4.ip_local_port_range Nginx needs to create two connections for every request: one to the client and the other to the upstream server. Increasing the port range prevents port exhaustion.
  • net.ipv4.tcp_fin_timeout The minimum number of seconds that must elapse before a connection in TIME_WAIT state can be recycled. Lowering this value means allocations are recycled faster.
  • net.ipv4.tcp_tw_recycle Enables fast recycling of TIME_WAIT sockets. Use with caution, and ONLY in internal networks where connectivity is fast.
  • net.ipv4.tcp_tw_reuse Allows reusing sockets in TIME_WAIT state for new connections when it is safe from the protocol viewpoint. The default value is 0 (disabled). It is generally a safer alternative to tcp_tw_recycle. Note: tcp_tw_reuse is particularly useful in environments where numerous short connections are opened and left in TIME_WAIT state, such as web servers; reusing the sockets can be very effective in reducing server load.

Make sure you run sudo sysctl -p after making modifications to the sysctl.conf.

NGINX Configurations

nginx.conf
worker_processes  auto;
worker_rlimit_nofile 96000;

events {
  use epoll;
  worker_connections  10000;
  multi_accept on;
}

http {
  sendfile    on;
  tcp_nopush  on;
  tcp_nodelay on;

  reset_timedout_connection on;

  upstream sentiment_server {
    server server0:9000;
    server server1:9001;
    server server2:9002;
    server server3:9003;
    server server4:9004;
    server server5:9005;
    server server6:9006;
    server server7:9007;
    server server8:9008;
    server server9:9009;
    server server10:9010;
    server server11:9011;
    keepalive 512;
  }

  server {
    server_name serverip;
    location / {
      proxy_pass http://sentiment_server;
      proxy_set_header   Connection "";
      proxy_http_version 1.1;
    }
  }
}
  • worker_processes defines the number of worker processes that nginx should use when serving your website. The optimal value depends on many factors including (but not limited to) the number of CPU cores, the number of hard drives that store data, and load pattern. When in doubt, setting it to the number of available CPU cores would be a good start (the value “auto” will try to autodetect it).
  • worker_rlimit_nofile changes the limit on the maximum number of open files for worker processes. If this isn't set, your OS will impose its own limit. Chances are your OS and nginx can handle more than ulimit -n reports, so we set this high so nginx never hits "too many open files".
  • worker_connections sets the maximum number of simultaneous connections that can be opened by a worker process. Since we bumped up worker_rlimit_nofile, we can safely set this pretty high; the theoretical ceiling on concurrent connections is roughly worker_processes × worker_connections.


Docker for the sentiment engine

Our sentiment engine runs inside a Docker container, which helps us iterate and deploy new models fast. Our initial assumption was that running inside Docker would carry a performance overhead, but it didn't. We tuned the container with configurations similar to the base machine; the sysctl.conf inside the container was almost identical to the host machine's.

A good addition to the backend infrastructure would be some kind of intelligent controller which can look at the load and scale the sentiment engine instances up or down. This can be done easily, as Docker exposes a REST API to create and destroy containers on the fly; a sketch follows below. If you are interested in the work we do, check our careers page: Viralheat Careers.
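As a rough sketch of what such a controller could do, the snippet below creates and destroys containers through the Docker remote API. The port (4243 was the daemon's default TCP port in early Docker releases), the image name, and the payload are assumptions, not our exact setup.

import json
import requests

DOCKER_API = "http://localhost:4243"   # assumed daemon address
HEADERS = {"Content-Type": "application/json"}

def scale_up(image="shrikar/sentiment-engine"):   # hypothetical image name
    """Create and start one more engine container; return its id."""
    resp = requests.post(DOCKER_API + "/containers/create",
                         data=json.dumps({"Image": image}), headers=HEADERS)
    container_id = resp.json()["Id"]
    requests.post(DOCKER_API + "/containers/%s/start" % container_id)
    return container_id

def scale_down(container_id):
    """Stop and remove a container once load drops."""
    requests.post(DOCKER_API + "/containers/%s/stop" % container_id)
    requests.delete(DOCKER_API + "/containers/" + container_id)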

FYI: please do not copy-paste these settings and assume they will work automatically. There are many variables, like the server's memory, CPU, etc. This guide should be used to help you with tuning.

Daily Commute and Coursera Course Completion Relationship - My View

First part of my Story:

I commute daily from Santa Clara to San Mateo and have been doing this for almost 15 months. Anyone who travels on Freeway 101 will agree with me that the traffic sucks. There is no predictable way of finding out when 101 will be free. I tried starting at different times but still could not find one time which works. If I am really, really lucky I reach the office in 35 minutes, but 90% of the time the commute is anywhere between 45 minutes and 1 hour 30 minutes (one way). I would say the average travel time is 1 hour (one way). This travel comes with an additional member who joins the party: STRESS. It's quite common to see a few accidents daily on 101. I was recently in a terrible accident where a guy hit my car from behind. I believe these accidents are mainly caused by using mobile phones, but I can't even blame the drivers, since the travel time itself is so bad that they need something to keep them occupied.

Second part of my Story:

I like to keep up with the current trends in technology and have been taking Coursera courses from day one. Initially my office was near my house and I was able to complete the courses after going back home. After joining the new company, I noticed that my completion rate had gone down significantly. I tried completing Functional Programming in Scala last time but couldn't. By the time I got home I was so tired that my enthusiasm for learning new stuff had decreased considerably. My laptop usage was restricted to checking mail, monitoring and fixing issues with the production systems if any, and exploring new stuff related to work.

Third part of my Story:

I always wanted to travel by public transport to avoid this traffic, but there was one constraint that prevented me from doing that: the connecting links between VTA, Caltrain and the shuttles. If I didn't want to waste time waiting between connections, I had to leave home at 7:20 to 7:30am. Due to my recent accident my car is currently in the body shop for repair, so I figured this was the right time to try public transport. The travel time itself has not gone down, but I can now utilize that time since I am not driving anymore. I can listen to music, browse, watch videos, or program. I started watching Coursera videos and completing the assignments during this time. The results have been awesome so far: I am in my last week of Functional Programming in Scala and hopefully will complete it this time.

If you are taking Coursera classes and have not been completing them, see if your pattern matches mine :).

PlayFramework SecureSocial and MongoDB

In this blog post we will cover how to integrate SecureSocial into a Play Framework application. SecureSocial is an authentication module for Play Framework applications supporting OAuth, OAuth2, OpenID, username/password and custom authentication schemes. SecureSocial has an example where the tokens and users are all stored in memory, but to make it a bit more interesting we will store all the users in MongoDB using the Play ReactiveMongo plugin.

Let's get started.

Installation

Add the required dependencies to project/Build.scala

project/Build.scala
object ApplicationBuild extends Build {

  val appName         = "crowdsource"
  val appVersion      = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    // Add your project dependencies here,
    jdbc,
    anorm,
    "org.reactivemongo" %% "play2-reactivemongo" % "0.9",
    "securesocial" %% "securesocial" % "2.1.1"
  )


  val main = play.Project(appName, appVersion, appDependencies).settings(
    resolvers += Resolver.url("sbt-plugin-snapshots", url("http://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/"))(Resolver.ivyStylePatterns)
  )

}  

ReactiveMongo Configuration

Create a file in conf/play.plugins

conf/play.plugins
400:play.modules.reactivemongo.ReactiveMongoPlugin

MongoDB configuration in conf/application.conf

conf/application.conf
mongodb.servers = ["localhost:27017"]
mongodb.db = "crowdsource"

Secure Social Configuration

Modifying routes

SecureSocial relies on these routes being available in the application.

conf/routes
# Login page
GET     /login                      securesocial.controllers.LoginPage.login
GET     /logout                     securesocial.controllers.LoginPage.logout

# User Registration and password handling (only needed if you are using UsernamePasswordProvider)
GET     /signup                     securesocial.controllers.Registration.startSignUp
POST    /signup                     securesocial.controllers.Registration.handleStartSignUp
GET     /signup/:token              securesocial.controllers.Registration.signUp(token)
POST    /signup/:token              securesocial.controllers.Registration.handleSignUp(token)
GET     /reset                      securesocial.controllers.Registration.startResetPassword
POST    /reset                      securesocial.controllers.Registration.handleStartResetPassword
GET     /reset/:token               securesocial.controllers.Registration.resetPassword(token)
POST    /reset/:token               securesocial.controllers.Registration.handleResetPassword(token)
GET     /password                   securesocial.controllers.PasswordChange.page
POST    /password                   securesocial.controllers.PasswordChange.handlePasswordChange


# Providers entry points
GET     /authenticate/:provider     securesocial.controllers.ProviderController.authenticate(provider)
POST    /authenticate/:provider     securesocial.controllers.ProviderController.authenticateByPost(provider)
GET     /not-authorized             securesocial.controllers.ProviderController.notAuthorized   

Append to the conf/play.plugins.

In this application we will use the username and password based authentication provided by SecureSocial, hence we need to make sure those plugins are properly configured.

conf/play.plugins
400:play.modules.reactivemongo.ReactiveMongoPlugin
1500:com.typesafe.plugin.CommonsMailerPlugin
9994:securesocial.core.DefaultAuthenticatorStore
9995:securesocial.core.DefaultIdGenerator
9996:securesocial.core.providers.utils.DefaultPasswordValidator
9997:controllers.plugin.MyViews
9998:service.MongoUserService
9999:securesocial.core.providers.utils.BCryptPasswordHasher
10004:securesocial.core.providers.UsernamePasswordProvider 

For SecureSocial to work we need to implement the UserService; in our case that is the service.MongoUserService entry at priority 9998. This is the component which stores the user data and tokens in MongoDB and retrieves them when required.

MongoUserService
package service

import _root_.java.util.Date
import securesocial.core._
import play.api.{Logger,Application}
import securesocial.core.providers.Token
import play.api.libs.json._
import play.api.libs.json.Reads._
import play.api.libs.json.Writes._
import securesocial.core.IdentityId
import securesocial.core.providers.Token
import play.modules.reactivemongo.MongoController
import play.api.mvc.Controller
import play.modules.reactivemongo.json.collection.JSONCollection
import scala.concurrent.Await
import scala.concurrent.duration._
import reactivemongo.core.commands.GetLastError
import scala.util.parsing.json.JSONObject
import org.joda.time.DateTime
import org.joda.time.format.{DateTimeFormatter, DateTimeFormat}

class MongoUserService(application: Application) extends UserServicePlugin(application) with Controller with MongoController{
  def collection: JSONCollection = db.collection[JSONCollection]("users")
  def tokens: JSONCollection = db.collection[JSONCollection]("tokens")
  val outPutUser = (__ \ "id").json.prune

  def retIdentity(json : JsObject) : Identity = {
    val userid = (json \ "userid").as[String]

    val provider = (json \ "provider").as[String]
    val firstname = (json \ "firstname").as[String]
    val lastname = (json \ "lastname").as[String]
    val email = (json \ "email").as[String]
    val avatar = (json \ "avatar").as[String]
    val hash = (json \ "password" \ "hasher").as[String]
    val password = ( json \ "password" \ "password").as[String]
    println("password : "+ password)
    val salt = (json \ "password" \ "salt").asOpt[String]
    val authmethod = ( json \ "authmethod").as[String]

    val identity : IdentityId = new IdentityId(userid,authmethod)
    val authMethod : AuthenticationMethod = new AuthenticationMethod(authmethod)
    val pwdInfo: PasswordInfo = new PasswordInfo(hash,password)
    val user : SocialUser = new SocialUser(identity,firstname,lastname,firstname,Some(email),Some(avatar),authMethod,None,None,Some(pwdInfo))
    user
  }

  def findByEmailAndProvider(email: String, providerId: String): Option[Identity] = {
    val cursor  = collection.find(Json.obj("userid"->email,"provider"->providerId)).cursor[JsObject]
    val futureuser = cursor.headOption.map{
      case Some(user) => user
      case None => false
    }
    val jobj = Await.result(futureuser, 5 seconds)
    jobj match {
      case x : Boolean => None
      case _  => Some(retIdentity(jobj.asInstanceOf[JsObject]))

    }
  }

  def save(user: Identity): Identity = {

    val email = user.email match {
      case Some(email) => email
      case _ => "N/A"
    }

    val avatar = user.avatarUrl match{
      case Some(url) => url
      case _ => "N/A"
    }

    val savejson = Json.obj(
      "userid" -> user.identityId.userId,
      "provider" -> user.identityId.providerId,
      "firstname" -> user.firstName,
      "lastname" -> user.lastName,
      "email" -> email,
      "avatar" -> avatar,
      "authmethod" -> user.authMethod.method,
      "password" -> Json.obj("hasher" -> user.passwordInfo.get.hasher, "password" -> user.passwordInfo.get.password, "salt" -> user.passwordInfo.get.salt),
      "created_at" -> Json.obj("$date" -> new Date()),
      "updated_at" -> Json.obj("$date" -> new Date())
    )
    println(Json.toJson(savejson))
    collection.insert(savejson)
    user
  }

  def find(id: IdentityId): Option[Identity] = {
   findByEmailAndProvider(id.userId,id.providerId)
  }

  def save(token: Token) {
    val tokentosave = Json.obj(
      "uuid" -> token.uuid,
      "email" -> token.email,
      "creation_time" -> Json.obj("$date" -> token.creationTime),
      "expiration_time" -> Json.obj("$date" -> token.expirationTime),
      "isSignUp" -> token.isSignUp
    )
    tokens.save(tokentosave)
  }



  def findToken(token: String): Option[Token] = {

     val cursor  = tokens.find(Json.obj("uuid"->token)).cursor[JsObject]
      val futureuser = cursor.headOption.map{
        case Some(user) => user
        case None => false
     }
      val jobj = Await.result(futureuser, 5 seconds)
      jobj match {
        case x : Boolean => None
        case obj:JsObject  =>{
          println(obj)
          val uuid = ( obj \ "uuid").as[String]
          val email = (obj \ "email").as[String]
          val created = (obj \ "creation_time" \ "$date").as[Long]
          val expire = (obj \ "expiration_time" \ "$date").as[Long]
          val signup = (obj \ "isSignUp").as[Boolean]
          val df = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss")
          Some(new Token(uuid,email,new DateTime(created),new DateTime(expire),signup))
        }
      }
  }

  def deleteToken(uuid: String) {}

  def deleteExpiredTokens() {}
}  

The above code mixes in MongoController, which provides helpers to interact with MongoDB using JSON documents instead of BSONDocuments.

This is our simple SecureSocial application controller, where the index action needs to be authenticated.

Application.scala
object Application extends Controller  with SecureSocial{
  def index = SecuredAction { implicit  request =>
    Ok(views.html.index(request.user))
  }
}

All the necessary code can be found on Github

Machine Learning Playground Using Docker

Recently I have seen a lot of interest from people wanting to learn machine learning, particularly machine learning in Python. Python has some awesome tools for getting started with machine learning, but the problem is that getting everything installed is painful. This is one of the reasons people lose interest in getting started :).

If you are familiar with Docker, I have created an image which has all the necessary packages installed:

  • NumPy
  • SciPy
  • scikit-learn
  • Matplotlib
  • Pandas

All you need to do is docker pull shrikar/machinelearning
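Once you are inside the container, a quick sanity check confirms the whole stack works. This is a minimal sketch; the dataset and classifier are just illustrations.

# Fit a classifier on a toy dataset bundled with scikit-learn.
from sklearn import datasets
from sklearn.linear_model import LogisticRegression

iris = datasets.load_iris()      # 150 samples, 4 features, 3 classes
clf = LogisticRegression()
clf.fit(iris.data, iris.target)
print("training accuracy:", clf.score(iris.data, iris.target))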

And if you don't know about Docker, go to Docker and get started. Since Docker needs the LXC features of the Linux kernel, it may be easiest to just get a simple $5 server on DigitalOcean, install Docker, and pull the image. Here is a tutorial on how to install Docker on DigitalOcean: Install docker on Digital Ocean

AngelList Funding

Identifying key investors in AngelList's funding.

AngelList exposes a well-documented API to get information about the funding of each company, their AngelList investors, and much more about venture capital funding on AngelList. Let's look at the funding of the AngelList company itself.

Getting Data.

  • Find the company's investors.
  • Create a graph of follow/follower relationships between the investors.
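To make the analysis below concrete, here is a hedged sketch of the pipeline using networkx. The edge list is made up for illustration; in practice the edges would come from the AngelList follower API.

import networkx as nx

# Toy follow edges between investors (follower -> followed); real edges
# would be fetched from the AngelList API.
edges = [("thomask", "naval"), ("naval", "mkapor"),
         ("thomask", "mkapor"), ("mkapor", "thomask")]

G = nx.DiGraph(edges)

betweenness = nx.betweenness_centrality(G)   # who sits "between" others
pagerank = nx.pagerank(G)                    # link-analysis importance
hubs, authorities = nx.hits(G)               # HITS hub/authority scores
eigen = nx.eigenvector_centrality(G, max_iter=1000)

print(sorted(betweenness, key=betweenness.get, reverse=True))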

Betweenness


This measure tells us which people are most "between" other people. A person who is on the shortest path of connections between other people is between them. Another way of putting this: if there is a set of connections between A and Z going through other people, and Q is on the shortest path between A and Z, then Q is said to be between A and Z. A person who is between a lot of other people has a higher betweenness centrality than a person who is not. Betweenness is useful because it potentially tells us which people are the key connectors of other people or groups of people. In our example, Thomas Korte (thomask) has the highest betweenness centrality.


Authority and PageRank.


In the graph we can see that the authority and PageRank for Mitch Kapor (mkapor) are high. PageRank and HITS are pioneering approaches that introduced link analysis ranking, in which the links pointing in to a node define the important nodes. So it's quite possible that Mitch Kapor's investment in AngelList led other investors to invest in the AngelList round. If AngelList had provided an API for finding out when an investor invested in the company, we could have analysed whether those investments were influenced by the authoritative node (Mitch Kapor).


Eigenvector Centrality.


I found this definition of Eigenvector centrality in IPL intelligence business. Eigenvector centrality is a little bit harder to describe easily, but is one of the most powerful techniques in the social network analysis toolkit. This measure takes into account not just the number of links that each person has (as in degree centrality), but also the number of links of the connected people, and their links too, and so on throughout the network. So if A is the key player in the group, with lots of connections to many other people, then a person B connected directly to A (but only to A) still has a lot of importance, even though B has only one connection. Person Z, out at the edge of the network might be connected to three people, but if those individuals are not of high importance themselves, then Z’s importance is similarly low. If we rank people by eigenvector centrality, we can see who the key important people are in the network. At the top of the list these may be obvious, but things can get more interesting as we see people who have a high eigenvector centrality even though they are not obviously important. Their appearance high up in the list gives us a clue that we may need to investigate further to determine why they are so high.


Finding the missing link.

The graphs clearly show that there exists some follow/follower relation between the investors, but there was one outlier in our case: Daniel Gould. It was surprising to see someone invest in a company without any link to the other investors. After exploring more, I found that Dave Morin, who is currently advising AngelList, is connected to Daniel Gould.

Kiji on CDH4.2.1

In this post I will talk about how to make Kiji work with CDH 4.2.1.

My assumption is that you have installed CDH 4.2.1 and that services like Hadoop, HBase and ZooKeeper are running. I used Cloudera Manager for installing Hadoop and all the necessary components; Cloudera Manager is the best tool for managing a Hadoop cluster.

First you need to set HADOOP_HOME, HBASE_HOME, HADOOP_CONF_DIR and HBASE_CONF_DIR. For installations done using Cloudera Manager, the paths would be something like /opt/cloudera/parcels …

  • Set these variables in your ~/.bashrc:

    export HADOOP_HOME=/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop
    export HBASE_HOME=/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hbase
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export HBASE_CONF_DIR=/etc/hbase/conf
    

  • Download Kiji from Install Kiji and run tar xzf kiji-bento-*.tar.gz. (I downloaded it on the HBase server.)

  • cd kiji-bento-albacore
  • pwd
  • Add the Kiji home folder to the PATH by appending the path you got from pwd to ~/.bashrc:
     export PATH=$PATH:/home/shrikar/kiji-bento-albacore/bin
     
  • source ~/.bashrc
  • bin/kiji install
  • Comment out the line which tries to configure the cluster
     #source "${KIJI_HOME}/cluster/bin/bento-env.sh"
     
  • source bin/kiji-env.sh
  • At this point you can continue with the remaining steps from Quick_Start_Guide

FontAwesome With Meteorjs

Meteorjs

Meteorjs is a new JavaScript framework for building realtime applications; more about Meteor can be found here: Meteorjs. One of the cool features of Meteorjs is its package manager. Many open source libraries, like Twitter's Bootstrap, are provided as packages, and in our application we will be using Bootstrap. Twitter Bootstrap provides a basic set of icons, but in this example I thought we would use Font Awesome, an iconic font library designed for Twitter Bootstrap (Font Awesome).

Existing third party Meteor packages didn't work

There are two Meteor packages which can be installed to integrate Font Awesome into a Meteor app, but for some reason neither of them worked for me.

  • bootstrap-fontawesome

/usr/local/lib/node_modules/meteorite/lib/sources/git.js:108
        throw "There was a problem cloning repo: " + self.url;
                                                   ^
There was a problem cloning repo: https://github.com/alexnotov/meteor-bootstrap-and-font-awesome
  • font-awesome

Errors prevented startup:
Exception while bundling application:
Error: The package named font-awesome does not exist.
    at _.extend.init_from_library (/usr/local/meteor/app/lib/packages.js:91:13)
    at Object.module.exports.get (/usr/local/meteor/app/lib/packages.js:225:11)
    at self.api.use (/usr/local/meteor/app/lib/bundler.js:94:28)
    at Array.forEach (native)
    at Function._.each._.forEach (/usr/local/meteor/lib/node_modules/underscore/underscore.js:79:11)
    at Object.self.api.use (/usr/local/meteor/app/lib/bundler.js:93:9)
    at _.extend.init_from_app_dir [as on_use_handler] (/usr/local/meteor/app/lib/packages.js:136:11)
    at _.extend.use (/usr/local/meteor/app/lib/bundler.js:382:11)
    at Object.exports.bundle (/usr/local/meteor/app/lib/bundler.js:707:12)
    at /usr/local/meteor/app/meteor/run.js:613:26
    at exports.inFiber (/usr/local/meteor/app/lib/fiber-helpers.js:22:12)
Your application is crashing. Waiting for file change.

“Necessity is the mother of all inventions.”

Structure of our Meteor Application

The default structure of a Meteor app is different from what we will be using.

Things to be done:

  • meteor create awesomeapp
  • cd awesomeapp
  • meteor add bootstrap
  • mkdir -p public/img
  • mkdir -p css
  • mkdir -p client
  • mkdir -p server
  • mv awesomeapp.css css/
  • Download Font Awesome from Download here.
  • Unzip the folder.
  • Move the font folder to public/.
  • Move all the CSS from the unzipped folder's css/ to css/.
  • Discard all other downloaded content (remove it from the root folder).

/RootFolder
     |
     |____ public
     |         |____ font
     |         |____ robots.txt
     |         |____ other static assets
     |____ css
     |      |____ awesomeapp.css
     |      |____ font-awesome.css ( all font awesome css files)
     |____ server
     |        |____ appserver.js ( Loaded only on the server side)
     |____ client
     |        |____ appclient.js ( Loaded only on the client side)
     |_ models.js (Loaded on both client and server)
          

Note: appserver.js, appclient.js and models.js are not created by default. Custom logic which needs to be executed only on the server or only on the client can go into those files.

Modifying font-awesome.css

Since we have put the fonts in the public directory of the Meteor app, we need to change the paths in font-awesome*.css as below.


@font-face {
  font-family: 'FontAwesome';
  src: url('/font/fontawesome-webfont.eot?v=3.0.1');
  src: url('/font/fontawesome-webfont.eot?#iefix&v=3.0.1') format('embedded-opentype'),
    url('/font/fontawesome-webfont.woff?v=3.0.1') format('woff'),
    url('/font/fontawesome-webfont.ttf?v=3.0.1') format('truetype');
  font-weight: normal;
  font-style: normal;
}

You should now be able to use any of the Font Awesome icons in your app. Check this for integrating the icons with your app code (Integration).

Easier Deployment/automation With Fabric


Fabric is a tool which has the flexibility to run commands on a remote machine, including sudo commands. Most distributions don't allow executing sudo commands without an attached TTY. I was initially using Rye for Ruby; it was able to perform most of the work but had problems executing sudo commands, and that's where Fabric shines.

Installation

sudo easy_install fabric

Use case

Installing dependencies for Riack.


fabfile.py :

from fabric.api import run
from fabric.api import env
from fabric.api import sudo, cd

env.password = 'password'
env.user = 'username'

def riak_dep():
    print("Executing on %s as %s" % (env.host, env.user))
    # sudo() already runs the command via sudo; no need to prefix it again
    sudo("apt-get install git --yes")
    sudo("apt-get install cmake --yes")

    with cd('downloads'):
        run('/usr/bin/wget http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.bz2')
        run('bzip2 -d protobuf-2.4.1.tar.bz2')
        run('tar -xvf protobuf-2.4.1.tar')
        with cd('protobuf-2.4.1'):
            print("Now configuring....")
            run('./configure')
            print("Now making ....")
            run('make')
            print("Running sudo install....")
            sudo('make install')

    print("Completed installing protobuf")

    with cd('~/downloads'):
        run('/usr/bin/wget http://protobuf-c.googlecode.com/files/protobuf-c-0.15.tar.gz')
        run('tar -zxvf protobuf-c-0.15.tar.gz')
        with cd('protobuf-c-0.15'):
            print("Now configuring....")
            run('./configure')
            print("Now making ....")
            run('export LD_LIBRARY_PATH=/usr/local/lib && make')
            print("Running sudo install....")
            sudo('make install')
            print("Completed installing protobuf-c")

    with cd('~/downloads'):
        print("Cloning the repository..")
        run('git clone https://github.com/trifork/riack.git')
        with cd('riack'):
            print("Running cmake ...")
            run('cmake src')
            print("Running make ...")
            run('make')
            print("Running make install...")
            sudo('make install')

    print("Installing dependencies complete on %s as %s" % (env.host, env.user))

Fabric in action

shrikar-dev$ fab -H "192.168.1.100" riak_dep