tag:blogger.com,1999:blog-29580213770426003732023-11-15T22:48:50.131-08:00Nuby Ruby / Rails TalesLearning to use Ruby on Rails, or... the trials and tribulations (and jubilations) of a ruby nuby.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.comBlogger75125tag:blogger.com,1999:blog-2958021377042600373.post-62197811733451380592016-05-30T08:53:00.000-07:002016-05-30T08:55:56.264-07:00Ruby/Cucumber and vim in multi-project environments...I'm currently working as a Senior QA Automation Engineer for a major bricks-and-mortar retailer that also has a large online presence. They offer a mobile app, a mobile-website experience, and special support for tablet, Android, and iOS devices. Needless to say, there are a number of different top-level projects, each providing a similar but customized experience for the channel their customers use.<br />
<br />
QA automation (QAA) uses ruby/cucumber for automated testing, and the QAA environment provides the 'standard' cucumber structure: features/ for feature files and features/step_definitions/ for step definitions. Each top-level project has its own sub-directories so, for example, the mobile-website project uses:<br />
<pre class="brush: bash">features/mobweb/
features/step_definitions/mobweb/</pre>while the tablet project uses:<br />
<pre class="brush: bash">features/tablet/
features/step_definitions/tablet/</pre>I use vim/tmux for my IDE and depend on <a href="https://github.com/tpope/vim-cucumber" target="_blank">vim-cucumber</a> (thanks to Tim Pope for a great plugin) to find relevant step definitions from the feature files. The problem I ran into is that -- because of the multiple project sub-folders -- I was constantly getting the error: "Multiple matching steps found" and thus had to examine each alternative step individually to find the one that related to my project.<br />
<br />
This was seriously irritating (not to mention time-consuming) so this weekend I dug into the vim-cucumber code to see what could be done. It turned out to be pretty straightforward (see my fork of vim-cucumber <a href="https://github.com/JESii/vim-cucumber" target="_blank">here</a>; I've submitted a pull request and hope it gets merged).<br />
<br />
Tim's approach is to first find the top-level cucumber feature file directory (this is in the ftplugin/cucumber.vim file); that could be either features/ or stories/.<br />
<pre class="brush: bash">let b:cucumber_root = expand('%:p:h:s?.*[\/]\%(features\|stories\)\zs[\/].*??')</pre>That is then saved as a globbable name (e.g., features/**/*.rb):<br />
<pre class="brush: bash">let b:cucumber_steps_glob = b:cucumber_root.'/**/*.rb'</pre>and later used to find all the relevant ruby files through the use of vim's glob statement:<br />
<pre class="brush: bash">for file in split(glob(b:cucumber_steps_glob),"\n")</pre><div>which then iterates through the list looking for the step you are searching for. Nice stuff.</div><br />
<div>Of course, with the multi-project directory structure we have, there are many 'duplicate' step definitions: we're all tapping on buttons or checking the status of similar things. Think <br />
<pre class="brush: ruby">When(/^I tap on the "(.*)" button$/) do...
Then(/^I should see the "(.*)" button is (enabled|disabled)$/) do...</pre>as simple examples.</div><br />
<div>I now have a working solution with a small footprint on the basic plugin: export a glob spec in the shell environment variable CUKEFILES. When setting the glob spec, vim-cucumber checks whether this variable exists and uses it instead of the default value. <br />
<pre class="brush: bash">if !exists("b:cucumber_steps_glob")
  if empty($CUKEFILES)
    echom "Using default definition for b:cucumber_steps_glob"
    let b:cucumber_steps_glob = b:cucumber_root.'/**/*.rb'
  else
    echom 'Using CUKEFILES environment variable for b:cucumber_steps_glob'
    let b:cucumber_steps_glob = $CUKEFILES
  endif
endif
</pre>Once that's set, everything works just as it did before, except against a more precise set of files. Note the slight difference when using CUKEFILES: the environment variable is assumed to already include the /**/*.rb suffix, so you can define multiple directories in your glob spec.</div><br />
<div>The CUKEFILES setting I'm using is:<br />
<pre class="brush: bash">export CUKEFILES=./features/step_definitions/mobweb/**/*.rb</pre>Now my multiple matches truly indicate duplicate step definitions that I need to examine and probably correct.</div><br />
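Before pointing vim at a new glob spec, it's worth confirming from the shell that the pattern matches what you expect. A quick sketch (the demo/ layout below is hypothetical, and bash 4+ is needed so that `**` recurses via globstar):

```shell
# Build a throwaway tree that mimics the project layout (hypothetical paths).
mkdir -p demo/features/step_definitions/mobweb/checkout
touch demo/features/step_definitions/mobweb/checkout/cart_steps.rb
touch demo/features/step_definitions/tablet_steps.rb   # outside mobweb: must not match

# Let bash expand ** recursively, and expand to nothing when there is no match.
shopt -s globstar nullglob

export CUKEFILES=demo/features/step_definitions/mobweb/**/*.rb
matches=( $CUKEFILES )            # unquoted on purpose so the shell expands the glob
printf '%s\n' "${matches[@]}"     # prints: demo/features/step_definitions/mobweb/checkout/cart_steps.rb
```

vim then picks the same pattern up via $CUKEFILES the next time it sets b:cucumber_steps_glob.<br />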
<div>Finding a working glob statement was its own challenge. I wanted to have the option of including more than one directory in the glob spec by using the "|" separator, but that's another story.</div><br />
JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-5375546487078046802016-05-22T19:47:00.000-07:002016-05-22T20:05:05.062-07:00ES6 highlighting and lint checking in vimI am a long-time "vimmie" (that is, I use vim for my IDE) and I love the plugin ecosystem which lets me do just about anything I want.<br />
<br />
For example, I recently started using ReactJS (wish I hadn't wasted so much time trying to get Angular to work) and one thing I love about it is how straightforward it is to set up and start writing TDD/BDD for development (but that's another story). My configuration uses Babel and es6, so I wanted to get syntax checking and indentation/highlighting working right off the bat.<br />
<br />
Here's how -- it's pretty easy. I'm using <a href="https://github.com/VundleVim/Vundle.vim" target="_blank">Vundle</a> as my plugin manager, so that's what the examples use. If you're using <a href="https://github.com/tpope/vim-pathogen" target="_blank">pathogen</a> or <a href="https://github.com/junegunn/vim-plug" target="_blank">vim-plug</a>, it should be pretty easy to adapt them. I've used pathogen and it's excellent, but I have no experience with vim-plug.<br />
<br />
<ol>
<li>If this is the first time you've used a plugin, go to the <a href="https://github.com/VundleVim/Vundle.vim" target="_blank">Vundle website</a> and follow the instructions there.</li>
<li>Install the vim-javascript plugin by pangloss: <a href="http://www.github.com/pangloss/vim-javascript" target="_blank">vim-javascript</a>. This will give you basic syntax-highlighting / indentation using vim's built-in syntax support.</li>
<li>Install the syntastic plugin by scrooloose: <a href="https://github.com/scrooloose/syntastic" target="_blank">syntastic</a> (he's the author of the excellent <a href="https://github.com/scrooloose/nerdtree" target="_blank">NERDTree</a> plugin which gives you a hierarchical tree menu for system files). This will allow you to use external syntax checkers with vim, in addition to the built-in syntax checking already available. I use the <a href="https://www.npmjs.com/package/eslint" target="_blank">eslint</a> program for this. The options I use in vim for syntastic are:<br />
<pre class="brush: bash">""""""""""""""""""""" Syntastic """"""""""""""""""""""""
" From: http://usevim.com/2016/03/07/linting/
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_loc_list_height = 5
let g:syntastic_auto_loc_list = 0
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 1
let g:syntastic_javascript_checkers = ['eslint']
let g:syntastic_error_symbol = '❌'
let g:syntastic_style_error_symbol = '⁉️'
let g:syntastic_warning_symbol = '⚠️'
let g:syntastic_style_warning_symbol = '💩'
highlight link SyntasticErrorSign SignColumn
highlight link SyntasticWarningSign SignColumn
highlight link SyntasticStyleErrorSign SignColumn
highlight link SyntasticStyleWarningSign SignColumn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""
</pre>
</li>
<li>Install the eslint program. This is a node module installed with the 'global' option so that it's available everywhere as an executable. This is just<br />
<pre class="brush: bash">npm install -g eslint
</pre>
</li>
<li>You would think this would work (I sure did) but it's not yet fully baked. With this setup, the first import statement in some of your .js files will be flagged as an error. To solve this issue, you have to give eslint a few parameters, in the file ~/.eslintrc.json<br />
<pre class="brush: javascript">// From: http://eslint.org/docs/user-guide/configuring
{
  "parserOptions": {
    "ecmaVersion": 6,
    "sourceType": "module",
    "ecmaFeatures": {
      "jsx": true
    }
  },
  "rules": {
    "semi": 2
  }
}
</pre>
</li>
</ol>
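Before relying on the config in vim, you can check that the file itself is well-formed. A small sketch (the eslint-demo scratch directory is hypothetical; the // comment is left out because plain JSON tools reject it, even though eslint itself tolerates comments in its JSON configs):

```shell
mkdir -p eslint-demo
# Recreate the parser options from step 5 (minus the comment line).
cat > eslint-demo/.eslintrc.json <<'EOF'
{
  "parserOptions": {
    "ecmaVersion": 6,
    "sourceType": "module",
    "ecmaFeatures": { "jsx": true }
  },
  "rules": { "semi": 2 }
}
EOF
# Fail loudly if the JSON is malformed.
python3 -m json.tool eslint-demo/.eslintrc.json > /dev/null && echo "config OK"
```

If eslint is on your PATH, running `eslint --print-config somefile.js` from that directory shows the fully resolved configuration -- a quick way to confirm your options were picked up.<br />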
Now you've got full es6 syntax checking and error notifications.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-67375214219525891632016-03-19T10:20:00.000-07:002016-03-20T09:42:50.669-07:00And I am so done with this, too...Before I get started, let me just say that I like a lot of things about AngularJS.<br />
<br />
The basic approach of using directives in the HTML is very appealing and seems natural; filters are easy and intuitive; custom directives provide great options for adding functionality with little fanfare; data models fit right into JavaScript; and the ease with which data flows between the view and the data model is pretty much seamless. Google being behind it in a big way is also a major plus.<br />
<br />
But what really excited me about AngularJS was that it had "Testability Built-in" [that's a quote from the AngularJS home page]. I'm a believer in TDD/BDD so that was a big selling point for me.<br />
<br />
However, my experience trying to get TDD actually working led me to a different conclusion. Here are some of my experiences.<br />
<br />
1. Creating a logger requires what? Logging is simple... right? In Ruby it's just:<br />
<pre class="brush: ruby">logger = Logger.new("my_log.log") # or STDOUT if you want to make it really simple
logger.error "This is my error"</pre>In the Javascript world, it's even simpler:<br />
<pre class="brush: javascript">console.log("This is my error")</pre>Sorry, Angular... you make it really ridiculous:<br />
<pre class="brush: javascript">describe('basic test', function(){
  var log;
  beforeEach(inject(function(_$log_){
    log = _$log_;
  }));
  it('should just work', function(){
    log.info('it worked!');
    expect(log.info.logs).toContain(['it worked!']);
  });
});
</pre>And notice the crazy little "trick" where you say _$log_ to get this to work. Yeah, it's "documented" but buried away and not exactly readily available.<br />
<br />
2. "injector already created. can not register a module"<br />
Hmmm... Angular knows that we want to register a module, they know that the injector has already been created, so why can't they Just Fix It? They do all kinds of other magic behind the scenes; this sounds like an easy one. If they can't Just Fix It, then how about a meaningful error message and documentation explaining why that really happens. This is one of those cases where you have to know the innards of Angular to be able to figure things out.<br />
See SO: <a href="http://stackoverflow.com/questions/24900067/injector-already-created-can-not-register-a-module">http://stackoverflow.com/questions/24900067/injector-already-created-can-not-register-a-module</a><br />
<br />
3. How many different ways are there to 'inject' stuff into your test? (I sure can't count 'em.) In fact, why do you even have to play that game at all? If you look around, you'll find countless ways to inject things...<br />
<br />
from: <a href="http://andyshora.com/unit-testing-best-practices-angularjs.html">http://andyshora.com/unit-testing-best-practices-angularjs.html</a><br />
<pre class="brush: javascript">describe("Unit Testing Examples", function() {
  beforeEach(angular.mock.module('App'));
  it('should have a LoginCtrl controller', function() {
    expect(App.LoginCtrl).toBeDefined();
  });
  it('should have a working LoginService service', inject(['LoginService',
    function(LoginService) {
      expect(LoginService.isValidEmail).not.to.equal(null);
      // test cases - testing for success
      var validEmails = [
        'test@test.com',
        'test@test.co.uk',
        'test734ltylytkliytkryety9ef@jb-fe.com'
      ];
      // test cases - testing for failure
      var invalidEmails = [
        'test@testcom',
        'test@ test.co.uk',
        'ghgf@fe.com.co.',
        'tes@t@test.com',
        ''
      ];
      // you can loop through arrays of test cases like this
      for (var i in validEmails) {
        var valid = LoginService.isValidEmail(validEmails[i]);
        expect(valid).toBeTruthy();
      }
      for (var i in invalidEmails) {
        var valid = LoginService.isValidEmail(invalidEmails[i]);
        expect(valid).toBeFalsy();
      }
    }])
  );
});
</pre><br />
from: <a href="http://stackoverflow.com/questions/12758157/how-to-inject-dynamically-dependence-in-a-controller">http://stackoverflow.com/questions/12758157/how-to-inject-dynamically-dependence-in-a-controller</a><br />
<pre class="brush: javascript">var algoController = function($scope, $injector) {
  $scope.base64 = $injector.get('base64');
};
</pre>But then there's the 'dynamic' alternative:
<pre class="brush: javascript">var algoController = function($scope, base64) {
  $scope.base64 = base64;
};
</pre><br />
Or how about this, from: <a href="https://www.airpair.com/angularjs/posts/testing-angular-with-karma">https://www.airpair.com/angularjs/posts/testing-angular-with-karma</a><br />
<pre class="brush: javascript">function MyController ($scope) {
  $scope.property = 'value';
}
MyController.$inject = ['$scope'];
</pre><br />
But then there's the inline array annotation (the form you need if your code will be minified):<br />
<pre class="brush: javascript">function MyController ($scope) {
  $scope.property = 'value';
}
angular.module('myApp', [])
  .controller('MyController', [
    '$scope',
    MyController
  ]);
</pre><br />
I'm sure that all these methods work just fine for the folks who documented them (nothing against them; I certainly respect what they can do). But trying to figure out a sane way to do things when you're getting started is... interesting.<br />
<br />
On <a href="http://stackoverflow.com/questions/31977313/tdd-with-angularjs-and-protractor">http://stackoverflow.com/questions/31977313/tdd-with-angularjs-and-protractor</a>:<br />
a question I asked that got no response...<br />
<br />
I could go on and on with the attempts I made to get a reliable, stable approach to injecting code for testing. In the end, I gave up. I'm sure that Angular supporters will scoff and say: "Hey! It's really easy... just do this".<br />
<br />
4. So I asked friends who were working with Angular for some help, and I got comments commiserating with me about how it's all not very clear, confusing, overly complicated, difficult, etc. Some folks even pointed out that you have to know a lot about how Angular works internally before you can easily create real tests.<br />
<br />
However, the kicker for me was when I asked a member of the Angular core team about testing and his response was a shrug, a pained expression and the comment: "Yeah; we need to work on that." <br />
<br />
That's when I said: "Yeah; I need to move on to something else..."<br />
JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-50625975602873741262015-10-08T20:49:00.000-07:002016-03-19T07:03:38.879-07:00(Sadly) I am so done with this...Years ago when I first started working with Ruby on Rails (back in the version 1.0 days) I started off on a Windoze machine... and we all know what a major headache that was. It didn't take me too long to start looking for other alternatives and I settled eventually on Kubuntu.<br />
<br />
It took a lot of work to get things up and running but I slowly learned more and more about bash, and setting up Apache, and getting Samba to work on my mostly Windoze network. A lot of work but I eventually got things working and was able to happily develop my system on that platform. I really came to love the power and flexibility that was possible on Linux.<br />
<br />
Eventually, I started working with some older MacBooks and while they were relatively under-powered and had their quirks it was a pretty nice setup. And for the past two years now I've been working with current MacBooks and loving it. However, when starting up a new project, I decided to go back and see how Linux would work: it would be my desktop / base machine and I could still use the MacBook when I was on the road.<br />
<br />
As usual, there were a number of glitches, not the least of which was that my last version, 12.10, was no longer supported. Not only does Ubuntu not support old releases, they actually take down any servers that contain the files necessary for an upgrade, so you're pretty much out of luck. I didn't want to start from scratch as I had a lot of things configured. I already had my /home on a separate hard drive, so I was eventually able to migrate things over and get up and running on 14.04.<br />
<br />
I figured I was good to go... wrong.<br />
<br />
Running Rails 4.0 turned out to be a problem: capybara-webkit refused to install so I started researching that. More hassles, more delays, but I figured it would settle down soon.<br />
<br />
Wrong again.<br />
<br />
I decided to do a reboot and that's when things got really, really bad. I got dropped into a grub rescue prompt, with complaints about a parameter that some well-meaning application had added. And even setting the boot drive (after even more research) didn't solve the problem.<br />
<br />
Fortunately I don't have much on the system that isn't backed up -- there's only one small system and database that I'll need to recover but this time I'm going to do it with a live disk and then...<br />
<br />
...I am done with Linux. I've come to the opinion that Linux is really great for two kinds of people:<br />
<ol>
<li>People that want to learn and play and don't mind re-installing frequently.</li>
<li>People that really really know how to make this stuff work well -- sysadmins.</li>
</ol>
Unfortunately I'm not one of those two kinds: I want to use the system to get meaningful work done, I don't have time to learn everything, and I certainly don't qualify as a sysadmin, even though I've gotten lots of things working over the years.<br />
<br />
So... sayonara, Linux... it's been good to know you. You've been a good companion -- albeit often a frustrating one -- and I've learned a lot from you, but it's time for me to move on.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-69663739913169517032015-04-10T17:42:00.001-07:002015-04-22T16:00:31.969-07:00git shortcutsOver the past few months, I've developed a few shortcuts that make using git a little easier & quicker. Here are a few... I'm curious as to what others are doing. These are defined in my .bashrc<br />
<br />
<b>List or use the currently active branch</b><br />
<pre class="brush: shell">function gbc() {
  if [ -e ./.git ]; then
    git branch | sed -n -e "/^\*/ p " | cut -c 3-99
  fi
}
export -f gbc
</pre>
Now if I want to see what the current branch is, I simply type `gbc` and it's displayed. What's even more useful, IMO, is that I can use that function to show the current branch in my bash prompt with something like this:<br />
<pre class="brush: shell">PS1='${debian_chroot:+($debian_chroot)}\u@\h:[\W-$(gbc)]\w\$ '
</pre>
and now my prompt looks like this:<br />
<pre class="brush: shell">jseidel@EDP28:[s_analyzer-master]~/dev/coding/s_analyzer$
</pre>
I've got mine color-coded (MUCH longer PS1 string) so that the branch name really stands out and I can see immediately where I am.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuPU0aPloQQOF9gWYoI4AipvdTCkX4vuBtiaFHu_rBu4xJr_HQmY59dwtHiSYl5Df4cEizLwnrKEAdNCWUAjwygrlmMu7RFD_TrD6PkYuPKkcw8hXwke8FkrDycxZK45QaelXsvlVcnCs/s1600/2015-04-10_1735_PS1Result.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuPU0aPloQQOF9gWYoI4AipvdTCkX4vuBtiaFHu_rBu4xJr_HQmY59dwtHiSYl5Df4cEizLwnrKEAdNCWUAjwygrlmMu7RFD_TrD6PkYuPKkcw8hXwke8FkrDycxZK45QaelXsvlVcnCs/s400/2015-04-10_1735_PS1Result.png" /></a></div>
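As an aside, newer versions of git can report the current branch directly, which avoids parsing `git branch` output. A variant of `gbc` built on that might look like this (the name `gbc2` and the demo repo path are just for illustration):<br />

```shell
# Variant of gbc using git plumbing: --abbrev-ref shortens "refs/heads/foo" to "foo".
function gbc2() {
  git rev-parse --abbrev-ref HEAD 2>/dev/null
}
export -f gbc2

# Demo in a throwaway repository (hypothetical path and branch names).
(
  mkdir -p /tmp/gbc-demo && cd /tmp/gbc-demo
  git init -q .
  git checkout -qb feature/demo
  git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
  gbc2   # prints: feature/demo
)
```

Outside a work tree it prints nothing and returns non-zero, so it's also safe to embed in PS1.<br />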
<br />
<b>Quickly add modified (but not new) files</b><br />
I frequently have a few untracked files in my repo which I do not want committed, so blindly adding everything doesn't work for me. Instead I created a `gadauto` command to add just the modified and deleted files.<br />
<pre class="brush: shell">function gadauto() {
  git status | ack 'modified:|deleted:' | sed -Ee 's/modified:|deleted://' | xargs git add
}
export -f gadauto
</pre>
Now a quick `gadauto` command adds modified or deleted files to the staging area.<br />
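Parsing the human-readable `git status` output is fragile across git versions and locales; `git status --porcelain` is the stable, script-friendly form. A variant built on it might look like this (the name `gadauto2` is just for illustration, and this simple version doesn't handle filenames containing spaces):<br />

```shell
function gadauto2() {
  # Porcelain lines look like "XY path"; the second column is the work-tree
  # status, so " M"/" D" are modified/deleted and "??" (untracked) is skipped.
  # (-r is GNU xargs: do nothing when there is no input.)
  git status --porcelain | grep -E '^.[MD] ' | cut -c4- | xargs -r git add
}
export -f gadauto2

# Demo in a throwaway repository (hypothetical paths).
(
  mkdir -p /tmp/gadauto-demo && cd /tmp/gadauto-demo
  git init -q .
  echo v1 > tracked.txt
  git add tracked.txt
  git -c user.name=demo -c user.email=demo@example.com commit -q -m init
  echo v2 > tracked.txt     # modified: should be staged
  echo new > untracked.txt  # untracked: should be left alone
  gadauto2
  git diff --cached --name-only   # prints: tracked.txt
)
```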
<br />
<b>Quickly add to and amend a commit</b><br />
We use git-review and the preferred workflow is to first create a new commit and then amend that commit until you're ready to push it for review. The command to do that is "git commit --amend". To shorten this, I created a simple alias:<br />
<pre class="brush: shell">alias gcamd='git commit --amend'
</pre>
Now when I'm ready to add my updated files and amend the commit, all I have to do is type:<br />
<pre class="brush: shell">gadauto && gcamd
</pre>
I could shorten that even more but I like to type just a bit more when I could make a messy mistake.<br />
<br />
<b>Quickly add merge conflicts that have been resolved</b><br />
The gadauto command works great but doesn't handle the use case of files that were conflicted and then resolved during, say, a git rebase. Handling this is just a simple modification of the gadauto command described above.<br />
<pre class="brush: shell">function gadboth() {
  git status | ack 'both modified:|both added:' | sed -Ee 's/both modified:|both added://' | xargs git add
}
export -f gadboth
</pre>
Now, following a `git rebase master` and conflict resolution, I just type:<br />
<pre class="brush: shell">gadboth
</pre>
and all of the files that were just resolved are added to the staging area, ready for the `git rebase --continue`.<br />
<br />
Hopefully this will help someone; I'd be interested to know what your favorite git shortcuts are.<br />
<br />
<b>Update</b> If you want to see some really cool git shortcuts, visit <a href="https://github.com/thoughtbot/dotfiles/blob/master/gitconfig" target="_blank">Thoughtbot's dotfiles</a>.<br />
<br />JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-41820606899083227912015-03-09T03:40:00.002-07:002015-03-09T03:41:09.129-07:00No, it wasn't the "world's least helpful error message"Last week, I commented about a problem with AngularJS and an error message that I found incredibly unhelpful. Turns out it was more helpful than I knew, although there are still problems.<br />
<br />
Here's what it looked like in Chrome DevTools:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaEz-RwjRCNWeAeyGZsxl9t-3bl0cH46I6Opzu_vL52sqPM_OfO_2Gb6W8SPLT-4-zZREA1EL9hI9751EW0Gia7OSlj_RNLQwiJO4gcUP7YGMCQsvL2zedRi58xnrJff9uICsF6fqn9lU/s1600/2015-03-09_0314_AngularErrorMsg-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaEz-RwjRCNWeAeyGZsxl9t-3bl0cH46I6Opzu_vL52sqPM_OfO_2Gb6W8SPLT-4-zZREA1EL9hI9751EW0Gia7OSlj_RNLQwiJO4gcUP7YGMCQsvL2zedRi58xnrJff9uICsF6fqn9lU/s1600/2015-03-09_0314_AngularErrorMsg-1.png" height="64" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
But that's a live link, so all you have to do is click on it and you're redirected to an AngularJS page that (hopefully) provides more information. Here's <a href="http://errors.angularjs.org/1.3.8/$injector/modulerr?p0=userApp&p1=Error%3A%20%5B%24injector%3Amodulerr%5D%20http%3A%2F%2Ferrors.angularjs.org%2F1.3.8%2F%24injector%2Fmodulerr%3Fp0%3DmainCtrl%26p1%3DError%253A%2520%255B%2524injector%253Anomod%255D%2520Module%2520%27mainCtrl%27%2520is%2520not%2520available!%2520You%2520either%2520misspelled%2520the%2520module%2520name%2520or%2520forgot%2520to%2520load%2520it.%2520If%2520registering%2520a%2520module%2520ensure%2520that%2520you%2520specify%2520the%2520dependencies%2520as%2520the%2520second%2520argument.%250Ahttp%253A%252F%252Ferrors.angularjs.org%252F1.3.0-build.3042%252Bsha.76e57a7%252F%2524injector%252Fnomod%253Fp0%253DmainCtrl%250A%2520%2520%2520%2520at%2520chrome-extension%253A%252F%252Fighdmehidhipcmcojjgiloacoafjmpfk%252Fdist%252Fhint.js%253A120%253A12%250A%2520%2520%2520%2520at%2520chrome-extension%253A%252F%252Fighdmehidhipcmcojjgiloacoafjmpfk%252Fdist%252Fhint.js%253A215%253A17%250A%2520%2520%2520%2520at%2520ensure%2520(chrome-extension%253A%252F%252Fighdmehidhipcmcojjgiloacoafjmpfk%252Fdist%252Fhint.js%253A139%253A38)%250A%2520%2520%2520%2520at%2520module%2520(chrome-extension%253A%252F%252Fighdmehidhipcmcojjgiloacoafjmpfk%252Fdist%252Fhint.js%253A213%253A14)%250A%2520%2520%2520%2520at%2520angular.module%2520(chrome-extension%253A%252F%252Fighdmehidhipcmcojjgiloacoafjmpfk%252Fdist%252Fhint.js%253A686%253A31)%250A%2520%2520%2520%2520at%2520angular.module%2520(chrome-extension%253A%252F%252Fighdmehidhipcmcojjgiloacoafjmpfk%252Fdist%252Fhint.js%253A1019%253A38)%250A%2520%2520%2520%2520at%2520http%253A%252F%252Fajax.googleapis.com%252Fajax%252Flibs%252Fangularjs%252F1.3.8%252Fangular.min.js%253A35%253A424%250A%2520%2520%2520%2520at%2520s%2520(http%253A%252F%252Fajax.googleapis.com%252Fajax%252Flibs%252Fangularjs%252F1.3.8%252Fangular.min.js%253A7%253A3
02)%250A%2520%2520%2520%2520at%2520g%2520(http%253A%252F%252Fajax.googleapis.com%252Fajax%252Flibs%252Fangularjs%252F1.3.8%252Fangular.min.js%253A35%253A202)%250A%2520%2520%2520%2520at%2520http%253A%252F%252Fajax.googleapis.com%252Fajax%252Flibs%252Fangularjs%252F1.3.8%252Fangular.min.js%253A35%253A371%0A%20%20%20%20at%20Error%20(native)%0A%20%20%20%20at%20http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A6%3A416%0A%20%20%20%20at%20http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A36%3A98%0A%20%20%20%20at%20s%20(http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A7%3A302)%0A%20%20%20%20at%20g%20(http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A35%3A202)%0A%20%20%20%20at%20http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A35%3A371%0A%20%20%20%20at%20s%20(http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A7%3A302)%0A%20%20%20%20at%20g%20(http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A35%3A202)%0A%20%20%20%20at%20Ob%20(http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A38%3A435)%0A%20%20%20%20at%20d%20(http%3A%2F%2Fajax.googleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.3.8%2Fangular.min.js%3A17%3A350" target="_blank">the link</a>, and here's the link text itself: quite a mouth/link full:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQBedk46CphjIHog3Xwa2l2JSdC-VbTXoKQXt_xB0CKqfUHhpiCsdR32FEUnU79u2y0dnPJVAsqzUJt6byzr1iQ1a0aIxEmd9DjaXp4iEuU9Q815cVzWmX-OYWyRtBYq4Oki5VnljyEKE/s1600/2015-03-09_0327_AngularJSErrorMsg-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQBedk46CphjIHog3Xwa2l2JSdC-VbTXoKQXt_xB0CKqfUHhpiCsdR32FEUnU79u2y0dnPJVAsqzUJt6byzr1iQ1a0aIxEmd9DjaXp4iEuU9Q815cVzWmX-OYWyRtBYq4Oki5VnljyEKE/s1600/2015-03-09_0327_AngularJSErrorMsg-2.png" height="314" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
What you get when you click on this link is a customized error page -- pretty neat way to handle things and sometimes it works very well -- this example was spot on:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD7tHhEH85ePvPRuNnWOcGJRfGkD3mJTDEtw_kgEwziIywGvFg8x4kdZwbIr4A6IJJbYOf-5gID_yAp3r5gKNeTNFp3Kr6Z4vBuEe9V6pRhItNczCrf01u5udONyzQ-5d7kC0PxnsFXxI/s1600/2015-03-09_0330_AngularErrorMsg-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjD7tHhEH85ePvPRuNnWOcGJRfGkD3mJTDEtw_kgEwziIywGvFg8x4kdZwbIr4A6IJJbYOf-5gID_yAp3r5gKNeTNFp3Kr6Z4vBuEe9V6pRhItNczCrf01u5udONyzQ-5d7kC0PxnsFXxI/s1600/2015-03-09_0330_AngularErrorMsg-3.png" height="416" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
But here's one that wasn't nearly so helpful:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRJs2h7hqdPREtc869ZqHoaX2LadYA12NZXgKmfY1jljPWcL52ZNveckunwz_SrxZeQGP0CHx81j_xdtBqPk41qb8aAlKhSG6L630EOjlN_Yo8lvW_c8VkQAa7nSyqCPWhdWzodox0LoE/s1600/2015-03-09_0334AngularErrorMsg-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRJs2h7hqdPREtc869ZqHoaX2LadYA12NZXgKmfY1jljPWcL52ZNveckunwz_SrxZeQGP0CHx81j_xdtBqPk41qb8aAlKhSG6L630EOjlN_Yo8lvW_c8VkQAa7nSyqCPWhdWzodox0LoE/s1600/2015-03-09_0334AngularErrorMsg-5.png" height="434" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
So...much better than I thought, although there are still some head-scratchers.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-81372063135800818942014-12-23T04:10:00.000-08:002014-12-23T04:10:00.710-08:00Trust your auto indentation... reallyI knew I liked my editor's autoindent feature, but I didn't realize how valuable it was until this situation popped up. I was reading the book <a href="https://rebuilding-rails.com/" target="_blank">Rebuilding Rails</a> (worth a look) and happened to code a simple method as part of one example.<br />
<pre class="brush: ruby">def call(env)
(klass, act) = get_controller_and_action(env)
controller = klass.new(env)
text = controller.send(act)
[200, {"Content-Type" => "text/html"},
...
</pre>As you can see, the indentation isn't correct for the 'controller =' and 'text =' lines, and try as I might<br />
<pre class="brush: bash">vim gg=G</pre>it wouldn't indent properly. I chalked it up as low priority weirdness to be dealt with later until I loaded up the program and got this error:<br />
<h2 style="color: #666666; font-family: sans-serif; font-weight: normal; margin: 0px 0px 0.8em; padding: 0px;">undefined local variable or method `controller' for ...</h2>I finally dropped into HexDump mode using vim's nifty little <a href="http://vim.wikia.com/wiki/Hex_dump" target="_blank">xxd utility</a><br />
<pre class="brush: shell">:%!xxd
</pre>and got this output for the lines that wouldn't indent properly.<br />
<pre class="brush: shell">0000060: 6c28 656e 7629 0a20 2020 2020 2028 6b6c l(env). (kl
0000070: 6173 732c 2061 6374 2920 3d20 6765 745f ass, act) = get_
0000080: 636f 6e74 726f 6c6c 6572 5f61 6e64 5f61 controller_and_a
0000090: 6374 696f 6e28 656e 7629 0a20 2020 2020 ction(env).
00000a0: 20c2 a0c2 a0c2 a0c2 a0c2 a063 6f6e 7472 ..........contr
00000b0: 6f6c 6c65 7220 3d20 6b6c 6173 732e 6e65 oller = klass.ne
00000c0: 7728 656e 7629 0a20 2020 2020 20c2 a0c2 w(env). ...
00000d0: a0c2 a0c2 a0c2 a074 6578 7420 3d20 636f .......text = co
00000e0: 6e74 726f 6c6c 6572 2e73 656e 6428 6163 ntroller.send(ac
</pre>Notice the 'c2a0' sequence repeated five times in front of the 'controller =' and 'text =' lines. Those non-printing characters were the culprit: the actual variable name was 0x'c2a0c2a0c2a0c2a0c2a0'controller, so when I then tried to use plain 'controller', it was still undefined. How those byte pairs got inserted remains a mystery. But I am guessing from <a href="http://stackoverflow.com/questions/2774471/what-is-c2-a0-in-mime-encoded-quoted-printable-text" target="_blank">this post</a> that it was U+00A0, the non-breaking space, whose UTF-8 encoding is exactly that 'c2 a0' byte pair.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-65808091077131885062014-05-24T08:12:00.000-07:002015-03-09T03:42:05.654-07:00Bash regular expression idiosyncrasiesAs a scripting language, bash can certainly be considered a bit unusual, but it's got the power, and later versions keep adding functionality that makes it easier to get things done. However, even with the latest additions, it's still got its quirks.<br />
<div><br />
</div><div>Consider the regular expression matching feature (added in version 3)</div><div><br />
</div><div>Given a value<br />
<blockquote class="tr_bq">MY_STRING='/usr/local/heroku/bin:/Users/jseidel/bin' </blockquote>let's say you want to check for the occurrence of the string 'jseidel/bin'. You might try the following script just to check things out in advance.</div><br />
<div><pre class="brush: shell">#!/bin/bash
export MY_STRING=/usr/local/heroku/bin:/Users/jseidel/bin
if [[ "$MY_STRING" =~ /.*jseidel\/bin.*/ ]]; then
  echo "Found .* version"
fi
if [[ "$MY_STRING" =~ /.*jseidel\/bin.+/ ]]; then
  echo "Found .+ version"
fi
if [[ "$MY_STRING" =~ 'jseidel\/bin' ]]; then
  echo "Found single-quoted version"
fi
if [[ "$MY_STRING" =~ "jseidel\/bin" ]]; then
  echo "Found double-quoted version"
fi
if [[ "$MY_STRING" =~ jseidel\/bin ]]; then
  echo "Found un-quoted version"
fi</pre></div><div>What you would find (even in bash 4.3.11) is that only the last one works.</div><div><ol><li>The first one fails because bash's '=~' doesn't use slash delimiters the way sed or Perl matching does; the leading and trailing '/' are treated as literal characters to match, and MY_STRING has no '/' after the final 'bin'.</li>
<li>The second one fails for the same reason; it is not a problem with the '+' operator, which bash's extended regular expressions do support.</li>
<li>The third and fourth ones fail because, since bash 3.2, any quoted part of the pattern (single- or double-quoted) is matched as a literal string rather than as a regular expression, so the quoted '\/' is taken to mean a literal backslash followed by a slash. Various docs suggest the quotes are harmless, but quoting actually changes the meaning of the pattern.</li>
</ol></div><div><br />
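Part of what tripped me up, I suspect, is habit from languages where slashes really are regex delimiters. Here's a quick Ruby contrast (my own sketch, not part of the script above):

```ruby
# In Ruby, the surrounding slashes delimit the regex; they are not part
# of the pattern. So the same /.*jseidel\/bin.*/ that fails under
# bash's [[ =~ ]] (where each '/' is a literal character) matches here.
my_string = '/usr/local/heroku/bin:/Users/jseidel/bin'

puts(my_string =~ /.*jseidel\/bin.*/ ? "match" : "no match")  # prints "match"
```

In bash, by contrast, the winning form is the bare, unquoted pattern: `[[ "$MY_STRING" =~ jseidel/bin ]]`.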
</div><div>Took me quite a bit to figure this one out; hope it helps someone else.</div>JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-517477112620401412014-03-01T10:04:00.000-08:002014-03-01T10:34:32.529-08:00I love Linux, but not Nvidia!I am really beginning to hate Nvidia! I just spent another 4 hours recovering from an update which confused the Nvidia drivers to no end. After lots of Googling, rebooting, and testing, here's what worked for me; hopefully it will help others (there certainly are lots of posts out there with similar problems).<br />
<br />
For the record, I'm running Kubuntu 13.10, 64-bit, with a GeForce GT630 Nvidia card and have two monitors configured side-by-side through nvidia-settings. Also for the record, the later versions of nvidia-settings actually work very well with multiple monitors, unlike earlier versions which required a great deal of hackery.<br />
<br />
<u><b>Initial symptoms</b></u><br />
After updating some applications I decided to reboot (I am rather paranoid these days about updates and rebooting, as I have had numerous Nvidia issues). KDM (actually lightdm in my version) loads, but after I attempt to start my session, it drops back to the basic GUI login prompt... and keeps cycling back to the login screen.<br />
<br />
The first problem I dealt with was finding and fixing any files in my home directory that were owned by root. This is a well-known problem on Ubuntu (maybe other distributions?) wherein root owns a few files and interrupts KDM loading because of permission issues. To resolve these issues, I ran<br />
<pre class="brush: shell">find -user root</pre>
in my home directory and it listed all files owned by root. Then I ran:<br />
<pre class="brush: shell">sudo chown jseidel:jseidel *</pre>
on the root-owned files to make them mine. Note that '*' doesn't match hidden files such as .Xauthority, so you may need to name those explicitly. If you have links, you may also have to use the -h option on chown.<br />
<br />
One particular problem file is .Xauthority, which sometimes, some way, gets owned by root and will certainly stop your desktop from loading, either in KDM or Gnome.<br />
<br />
<u><b>What finally worked for me</b></u><br />
As I progressed through numerous tests, I noticed that when I dropped into a shell (Ctrl-Alt-F1), I got the message<br />
<pre class="brush: shell">initctl: Event failed</pre>
which can indicate an Nvidia driver issue (I've had these many times before -- always a PITA!).<br />
<br />
When I looked in the Kernel log after a failure:<br />
<pre class="brush: shell">less /var/log/kern.log</pre>
I found a series of messages like:<br />
<pre class="brush: shell">API mismatch...
Nvidia client is version 331.49
Kernel is version 319.60
</pre>
So... the Nvidia installer goofed! What finally worked for me in this situation was the following:<br />
<pre class="brush: shell">sudo apt-get remove --purge nvidia*
</pre>
followed by reinstalling the appropriate Nvidia driver, V331.20 in my case. I do my installations from the command line like so, after downloading the desired Nvidia driver:<br />
<pre class="brush: shell">cd ~/Drivers
chmod +x NVIDIA-Linux-x86_64-331.20.run
sudo ./NVIDIA-Linux-x86_64-331.20.run</pre>
<br />
Some useful commands:<br />
<pre class="brush: shell">less /proc/driver/nvidia/version # Show installed version
</pre>
<br />
Some possibly useful links:<br />
<a href="http://askubuntu.com/questions/41681/blank-screen-after-installing-nvidia-restricted-driver">http://askubuntu.com/questions/41681/blank-screen-after-installing-nvidia-restricted-driver</a><br />
<br />
<a href="http://news.softpedia.com/news/How-to-Install-the-Latest-NVIDIA-331-20-Drivers-in-Ubuntu-13-10-399182.shtml">http://news.softpedia.com/news/How-to-Install-the-Latest-NVIDIA-331-20-Drivers-in-Ubuntu-13-10-399182.shtml</a><br />
<br />
<a href="http://ubuntugenius.wordpress.com/2013/11/29/ubuntu-13-10-permissions-fix-this-operation-cannot-continue-since-proper-authorization-was-not-provided-halts-software-updater-shutdown-drive-mounting-dvd-playback-etc/" target="_blank">http://ubuntugenius.wordpress.com/2013/11/29/ubuntu-13-10-permissions-fix-this-operation-cannot-continue-since-proper-authorization-was-not-provided-halts-software-updater-shutdown-drive-mounting-dvd-playback-etc/ </a><br />
<br />
<a href="http://ubuntuforums.org/showthread.php?t=1745841">http://ubuntuforums.org/showthread.php?t=1745841</a><br />
<br />
<br />JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-10442289219423259962014-01-13T14:16:00.001-08:002014-01-13T14:16:06.168-08:00Getting Phusion Passenger to update the site...First time using Phusion Passenger on a Rails site that I'm supporting for a new client, hosted on dreamhost.com.<br />
<br />
I made a very simple upgrade to a view but it wouldn't display the updated content. If you've worked with Passenger before, you probably know this, but Passenger won't reload any new content unless you update the modification date of a special file "tmp/restart.txt", as in:<br />
<pre class="brush: shell">touch tmp/restart.txt</pre>
This information is buried fairly deep in the documentation so I didn't notice it immediately. Hopefully this will save someone else a bit of head-scratching.<br />
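If you deploy with a Rake task or script, you can fold the touch in so it's never forgotten. A minimal Ruby sketch (my own; restart_passenger is a made-up helper name, not a Passenger API):

```ruby
require "fileutils"

# Bump tmp/restart.txt's modification time so Passenger reloads the app
# on the next request -- the Ruby equivalent of `touch tmp/restart.txt`.
def restart_passenger(app_root = ".")
  restart_file = File.join(app_root, "tmp", "restart.txt")
  FileUtils.mkdir_p(File.dirname(restart_file)) # make sure tmp/ exists
  FileUtils.touch(restart_file)                 # create or update mtime
  restart_file
end
```

Calling this at the end of a deploy task means the reload happens automatically with every release.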
<br />
<br />
<br />JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-44796457490096172802013-10-10T02:11:00.000-07:002013-10-10T02:11:11.697-07:00Ruby variable assignment... how does that really work?I've recently been working to improve my Ruby skills, so I've started coding many of the exercises in "<a href="http://www.amazon.com/Cracking-Coding-Interview-Programming-Questions/dp/098478280X/ref=sr_1_1?ie=UTF8&qid=1381393994&sr=8-1&keywords=cracking+the+coding+interview" target="_blank">Cracking the Coding Interview</a>" by Gayle Laakmann McDowell, implementing the problems in TDD/Ruby instead of in Java.<br />
<br />
In the process, I ran into a real puzzler -- a bug that I and several friends looked at and couldn't quickly solve. It turned out to be really basic Ruby.<br />
<br />
I had developed the following code as part of the answer to CtCI's problem 9.4: Create all the subsets from a set (you can find the complete solution <a href="https://github.com/JESii/CtCI-with-Ruby-TDD/blob/master/chapter_09/9.4_subsets.rb" target="_blank">here</a>).<br />
<pre class="brush: ruby">def ss_raw set
  @result_raw << set
  return if set.size == 1
  set.each_index do |i|
    tmp = set
    tmp.delete_at i
    ss_raw(tmp)
  end
end
</pre>
The intent of this code was to recursively remove one element from the input and generate the subsets. The tmp variable was intended as a proxy for the set variable so that the each_index block could process each subset by deleting one element at a time. It didn't work and gave some really perplexing results, apparently replacing the contents of @result_raw rather than appending to it.<br />
<br />
Unfortunately, I forgot the basics: <br />
<pre class="brush: ruby">tmp = set</pre>
doesn't create a new object; it simply creates another reference to the same object that set points to. Therefore, modifying tmp also modifies set and the whole process goes south. The solution? Just say <br />
<pre class="brush: ruby">tmp = set.clone</pre>
and you get a new and separate variable and the method works as expected.<br />
<br />
In many cases, an oversight like this won't cause a problem, but in this case it wound up modifying the method parameter as well as modifying the basis of the each_index block... a definite no-no and exactly what I was trying to avoid in the first place.<br />
<br />
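The aliasing is easy to see in a stripped-down example (mine, not from the CtCI solution):

```ruby
# Plain assignment copies the reference, not the object: both names
# point at the same Array, so mutating through one mutates "both".
set = [1, 2, 3]
tmp = set
tmp.delete_at(0)
puts set.inspect        # prints [2, 3] -- the original changed too
puts tmp.equal?(set)    # prints true  -- literally the same object

# clone (dup would also do here) makes a new top-level object.
set = [1, 2, 3]
tmp = set.clone
tmp.delete_at(0)
puts set.inspect        # prints [1, 2, 3] -- untouched this time
```

One caveat worth remembering: clone and dup are shallow copies, so the elements themselves are still shared. That didn't matter for this problem, but it can bite with nested arrays.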
Hopefully this little reminder will help someone else avoid the pain.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-81891216694361396762013-09-06T17:04:00.000-07:002013-09-07T09:17:37.260-07:00RVM oddityEven though I don't use all the features, I love <a href="http://rvm.io/" target="_blank">RVM</a>... it generally makes my life easier when dealing with new/multiple versions of Ruby and/or different <a href="http://rvm.io/gemsets" target="_blank">gemsets</a>. Definitely recommended, although I know there are people who swear by <a href="https://github.com/sstephenson/rbenv" target="_blank">rbenv</a>. In either case, they both make magical things happen by working in the background... all terrific as long as it works. But when it doesn't work as expected, it can really be difficult to diagnose and resolve.<br />
<br />
Case in point. The project I'm working on is done in Ruby 1.9.3-p327 and I have a 'Rails3213' gemset that I've been using. I've always used a .rvmrc file (analogous to .vimrc or .bashrc) to specify my desired ruby and gemset versions on a per-project basis. Somewhere along the line, I upgraded RVM and then got a prompt that I should switch from .rvmrc to .ruby-version for my ruby/gemset information, advice which I followed.<br />
<br />
Mistake. Unbeknownst to me, the information in my .ruby-version file was silently ignored and I was using gems from another location inside RVM. It only came to light when I encountered some strange test results, and changes I made while debugging a gem seemed to disappear. I finally got a clue about what was going on through a tell-tale signal from vim's "tag jump" (Ctrl-]) command: I had been working on the gem in:<br />
<pre class="brush: shell">~/.rvm/gems/ruby-1.9.3-p327@rails3213/clearance-1.0.1</pre>
but when I used Ctrl-] to jump to a method definition inside vim, I was routed to a file in:<br />
<pre class="brush: shell">~/.rvm/gems/ruby-1.9.3-p327/clearance-1.0.1</pre>
Going back to the RVM site, I saw that .rvmrc was still (probably always had been?) the preferred way to define things, so good-bye .ruby-version, hello .rvmrc, and things were back to normal.<br />
<br />
Lesson Learned. For the future, if I run into such strangeness again, I'll look to my environment much earlier to make sure I'm working with the right set of gems or rubies or whatever. One thing I was reminded of that I had forgotten was that rvm nicely tells you the ruby & gemset that you're using when you enter a project directory with a .rvmrc file:<br />
<blockquote class="tr_bq">
Using /home/jseidel/.rvm/gems/ruby-1.9.3-p327 with gemset rails3213</blockquote>
<div>
And if I need to check up on things, a quick :</div>
<pre class="brush: shell">rvm list && rvm gemset list</pre>
tells me all I need to know that I'm in the right place:<br />
<pre class="brush: shell">rvm rubies

   ruby-1.9.2-p290 [ x86_64 ]
=* ruby-1.9.3-p327 [ x86_64 ]
   ruby-1.9.3-p448 [ x86_64 ]
   ruby-2.0.0-p0 [ x86_64 ]
   ruby-2.0.0-p247 [ x86_64 ]

# => - current
# =* - current && default
#  * - default

gemsets for ruby-1.9.3-p327 (found in /home/jseidel/.rvm/gems/ruby-1.9.3-p327)
   (default)
   global
   rails3020
   rails3212
=> rails3213
</pre>
Another thing I'm going to do more carefully is re-create my tags file whenever there are gem changes so that I'm always up-to-date. I found a great bash one-liner (if I can remember where I found this, I'll come back and give credit):<br />
<pre class="brush: shell">ctags -R . $(bundle show --path)</pre>
This puts all your gems into the tags file which means that you can Ctrl-] on methods that are included in Rails or other gems.<br />
<br />
<b>Update:</b><br />
Now when I enter the project directory, I get the RVM warning:<br />
<pre class="brush: shell">You are using '.rvmrc', it requires trusting, it is slower and it is not compatible with other ruby managers,
you can switch to '.ruby-version' using 'rvm rvmrc to [.]ruby-version'
or ignore this warnings with 'rvm rvmrc warning ignore /home/jseidel/Dev/MyPasswords_Upgrade/.rvmrc',
'.rvmrc' will continue to be the default project file in RVM 1 and RVM 2,
to ignore the warning for all files run 'rvm rvmrc warning ignore all.rvmrcs'.</pre>
It may be slower (I sure can't tell the difference), but I'm going to stick with .rvmrc as long as I can.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-20760080876556839832013-08-30T11:03:00.000-07:002013-08-30T11:03:09.887-07:00Another take on WickedPDF, wkhtmltopdf, and heroku...A while back, I commented on <a href="http://www.nubyrubyrailstales.com/2013/06/wickedpdf-wkhtmltopdf-and-herokua.html" target="_blank">installing WickedPDF/wkhtmltopdf to work on Heroku</a>. But there are still a few more things to be done to get it all to work. Here's a checklist I provided to a friend when they were having problems.<br />
<br />
1. Added gem "wicked_pdf" to my Gemfile<br />
<br />
2. Copied the appropriate wkhtmltopdf binary into the bin/ directory in my Rails app<br />
<br />
3. Updated config/initializers/wicked_pdf.rb to include:<br />
<pre class="brush: ruby">WickedPdf.config = { :exe_path => "#{Rails.root}/bin/wkhtmltopdf" }</pre>so that it knows where I've stored the binary<br />
<br />
4. Added the following to the show method in my controller; this is what drives the whole PDF creation. There are many more options available; this is just my minimal setup.<br />
<pre class="brush: ruby">format.pdf do
  render :pdf => "AccRpt_#{@accident_report.id}",
    :template => 'accident_reports/show', :formats => [:pdf], :handlers => [:haml],
    :show_as_html => params[:debug].present?, # allow debugging based on url param
    :page_size => :letter,
    #:debug_javascript => true,
    :footer => {
      :left => "#{Time.now}",
      :center => "Accident Report ##{@accident_report.id}",
      :right => "Page [page] of [topage]",
      :line => true },
    :user_style_sheet => "/assets/stylesheets/pdf.css",
    :layout => "pdf.html" # or false
end
</pre><br />
NOTE The "show_as_html" option. By adding this, you can add "&debug=1" to your URL and see what the html would have been. Sometimes helpful in tweaking the output.<br />
<br />
5. Created a new stylesheet called pdf.css and a new javascript file called pdf.js. They don't do a lot, but I pretty much extract out as much stuff as I can so that only what's absolutely necessary is included here.<br />
<br />
6. Created a new layout called pdf.html.haml. Cuts out all the overhead associated with 'normal' html display. Again, nothing but what's absolutely necessary to produce the PDF.<br />
<br />
7. In that layout, used a Content Delivery Network (CDN) for all the jquery stuff I use - that's directly available to wkhtmltopdf since it's a full URL (remember I said that you can't use relative URLs with wkhtmltopdf).<br />
<br />
8. Where necessary, used the WickedPDF-provided helpers to include other stylesheets/javascript files. Make sure you read the "Usage Conditions - Important!" section on the WickedPDF page - I added that after I ran into problems.<br />
<br />
9. Created a show.pdf.haml view template with the data/formatting I wanted.<br />
<b>NOTE</b>: You can render an existing html partial as part of your PDF output, but to do so you MUST add:<br />
<pre class="brush: ruby">:handlers => [:haml], :formats => [:pdf]</pre>to the render call, and then you can actually reuse an html partial, such as _body.html.haml. If you don't do that, you'll get a complaint that it doesn't have the proper pdf template.<br />
<br />
10. Added 'PDF' links at various places, such as in the index view and the show view.<br />
<br />
11. Made sure that the PDF mime-type was registered in config/initializers/mime_types.rb:<br />
<pre class="brush: ruby">Mime::Type.register "application/pdf", :pdf</pre><br />
I believe that covers it all.<br />
<div><br />
</div>JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-41605060348287549572013-07-24T07:14:00.000-07:002013-07-24T07:14:14.189-07:00Setting up Jasmine for JavaScript testsTesting your JavaScript with Jasmine is A Good Thing, so I figured it would be easy to get started. There's great documentation at the <a href="http://pivotal.github.io/jasmine/" target="_blank">Github/Jasmine</a> site as to how the tests are described and what matchers are available, but there didn't seem to be any clear-cut instructions on how to get the whole thing to run once you've installed it. There's talk of a file 'SpecRunner.html' and there were a number of posts asking "Where's my SpecRunner file?". And even after you install the gem, there's no documentation provided with rake -T to tell you what to do.<div><br />
</div><div>Turns out it's really drop-dead simple so - just in case someone else has this problem - here's what I did.</div><pre class="brush: shell">cd /path/to/your/project
gem install jasmine
rake jasmine
# then open http://localhost:8888 in your browser
</pre><div>The tests in your project are run automatically.</div><div><br />
</div>JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-1985297920473532482013-06-26T09:24:00.000-07:002013-06-26T09:24:17.584-07:00Silence is NOT golden...Here's a sneaky little routing gotcha. Let's say you're TDD'ing a new feature and your test requires a 'new' action. Following best practice, you start by saying<br />
<pre class="brush: ruby">resources :accounts, only: [:new]</pre>
instead of<br />
<pre class="brush: ruby">resources :accounts</pre>
Before we get to the gotcha, why would I say 'best practice' is to avoid the complete definition? Well... remember one of the core principles of TDD:<br />
<blockquote class="tr_bq">
Write ONLY enough code to get the test to pass.</blockquote>
This means that adding only a single action is all I need do to get the test to pass, so that's all I add (even though I actually write more code in that case). This idea certainly isn't new to me; for example, check out the <a href="https://learn.thoughtbot.com/workshops/18-test-driven-rails">Thoughtbot TDD class</a> if you're interested in learning more: great resource!<br />
<br />
On a broader scale, assuming you've got this whole feature written, you still want to avoid adding any unused actions. If you use every RESTful action except for :destroy, then you don't want to allow the destroy action to exist, as it offers one more opportunity for the bad guys to attack your site.<br />
<br />
Back to the gotcha. Let's say that you code the above example, but - oops - you make a typo and actually code:<br />
<pre class="brush: ruby">resources :accounts, only: [:newt]</pre>
Guess what - this error is silently ignored. No route is generated and you don't get an error message. In fact, the only: and except: options just ignore anything that isn't on the list of RESTful actions (index, create, new, update, edit, show, destroy) and give no warning.<br />
<br />
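If you'd rather have typos like :newt blow up at boot instead of vanishing, one option is a tiny guard of your own. This is a hedged sketch -- checked_actions is my own made-up helper, not something Rails provides:

```ruby
# The seven RESTful actions that only:/except: actually recognize.
REST_ACTIONS = [:index, :create, :new, :edit, :show, :update, :destroy].freeze

# Raise on anything Rails would silently ignore.
def checked_actions(*actions)
  unknown = actions - REST_ACTIONS
  unless unknown.empty?
    raise ArgumentError, "not RESTful action(s): #{unknown.inspect}"
  end
  actions
end

checked_actions(:new)    # returns [:new]
# checked_actions(:newt) # raises ArgumentError
```

Then `resources :accounts, only: checked_actions(:new)` behaves exactly as before, while the :newt typo fails loudly the first time the routes file is loaded.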
It's easy to overlook this as we're used to having Rails give us helpful error messages if we code something incorrectly. Not so in this case.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-39432251672385554982013-06-23T09:28:00.000-07:002013-06-23T09:28:42.249-07:00"Can't find generator help."Here is a tricky little case of mis-direction. I've gotten used to github, heroku, and a host of other command-line tools that allow you to add the keyword 'help' to the command to get some basic usage info. Not so, Rails. If you type the following <u><i>inside</i></u> a Rails application (I'm on Rails 3.2.13 or 3.2.12):<br />
<pre class="brush: shell">rails generate help
</pre>
you get the decidedly unhelpful message:<br />
<pre class="brush: shell">Could not find generator help.
</pre>
Rails thinks you're looking to generate a "help" something or other.<br />
<br />
However, if you type the same command <u><i>outside</i></u> a Rails app, you get what you'd expect:<br />
<pre class="brush: shell">Usage:
  rails new APP_PATH [options]

Options:
  -r, [--ruby=PATH]              # Path to the Ruby binary of your choice
                                 # Default: /home/jseidel/.rvm/rubies/ruby-1.9.3-p194/bin/ruby
  -b, [--builder=BUILDER]        # Path to a application builder (can be a filesystem path or URL)
  -m, [--template=TEMPLATE]      # Path to an application template (can be a filesystem path or URL)
      [--skip-gemfile]           # Don't create a Gemfile
      [--skip-bundle]            # Don't run bundle install
  -G, [--skip-git]               # Skip Git ignores and keeps
  -O, [--skip-active-record]     # Skip Active Record files
  -S, [--skip-sprockets]         # Skip Sprockets files
  -d, [--database=DATABASE]      # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db/sqlserver/jdbcmysql/jdbcsqlite3/jdbcpostgresql/jdbc)
                                 # Default: sqlite3
  -j, [--javascript=JAVASCRIPT]  # Preconfigure for selected JavaScript library
                                 # Default: jquery
  -J, [--skip-javascript]        # Skip JavaScript files
      [--dev]                    # Setup the application with Gemfile pointing to your Rails checkout
      [--edge]                   # Setup the application with Gemfile pointing to Rails repository
  -T, [--skip-test-unit]         # Skip Test::Unit files
      [--old-style-hash]         # Force using old style hash (:foo => 'bar') on Ruby >= 1.9

Runtime options:
  -f, [--force]    # Overwrite files that already exist
  -p, [--pretend]  # Run but do not make any changes
  -q, [--quiet]    # Suppress status output
  -s, [--skip]     # Skip files that already exist

Rails options:
  -h, [--help]     # Show this help message and quit
  -v, [--version]  # Show Rails version number and quit

Description:
    The 'rails new' command creates a new Rails application with a default
    directory structure and configuration at the path you specify.

    You can specify extra command-line arguments to be used every time
    'rails new' runs in the .railsrc configuration file in your home directory.

    Note that the arguments specified in the .railsrc file don't affect the
    defaults values shown above in this help message.

Example:
    rails new ~/Code/Ruby/weblog

    This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
    See the README in the newly created application to get going.
</pre>
JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-56206245019589335842013-06-20T08:58:00.002-07:002013-06-20T09:01:16.425-07:00wkhtmltopdf and all those gems...<br />
Looking back on my wkhtmltopdf / Heroku experience, there are several take-aways for me.<br />
<br />
First off, I had contacted Heroku technical support for assistance, and got back a quick reply:<br />
<br />
<blockquote class="tr_bq">
<i>Unfortunately we don't support installing additional libraries or binaries to our stacks. The best workaround is to vendor these into your project. You'll need to use 64-bit Linux versions to make them work on Heroku; compiling statically can also help ensure that any dependencies needed are included. Similarly, for gems that depend on external libraries, we recommend compiling the gem statically and vendoring it into your project (which is what it seems you have already tried to do).</i><br />
<i><br /></i>
<i>We realize this is not a trivial task and can be very difficult to get working, and we hope to provide an easier way to do this in the future. Unfortunately we do not have an ETA on when this improved functionality will be available.</i><br />
<i><br /></i>
<i>If you do wish to try to vendor, or if you have no success with vendoring, your binary, library, or gem, you can use Heroku as your build environment. One of our engineers created a build server that allows you to upload source code, run the compilation step, and then download the resulting binary. You can find this <a href="https://github.com/heroku/vulcan">project on Github</a>.</i></blockquote>
Kinda makes sense to me: they have to protect their stack. I'm not surprised, just a wee bit disappointed that they don't make something as widespread and generally useful as wkhtmltopdf available as a standard binary. Oh well; they do seem to be looking into a better solution, and they did provide some good information. The link to that project looks like it might be useful in the future.<br />
<br />
Once I got my app with PDF working, I submitted a documentation update to <a href="https://github.com/mileszs/wicked_pdf" target="_blank">WickedPDF</a>, which was promptly merged (thanks @unixmonkey). Hopefully that'll provide some assistance to other folks.<br />
<br />
One of the problems I had with the various gems that include binary versions of wkhtmltopdf was that they hadn't been updated recently... sometimes in over a year. So I figured that I should be a good citizen and update them with the latest version (0.11.0 rc1 at the time I'm writing this). To that end, I just submitted a pull request for the <a href="https://github.com/bradphelan/wkhtmltopdf-heroku" target="_blank">wkhtmltopdf-heroku gem</a>. It purports to automagically support WickedPDF, PDFKit, and wisepdf, depending on which one is loaded (I've not tested it myself).<br />
<br />
However, when I started looking at the wkhtmltopdf-binary "family" of gems, it was pretty involved: <a href="https://github.com/steerio/wkhtmltopdf-binary" target="_blank">steerio/wkhtmltopdf-binary</a> (the one I originally investigated) is forked from <a href="https://github.com/unixmonkey/wkhtmltopdf_binary_gem" target="_blank"> unixmonkey/wkhtmltopdf_binary_gem</a> which in turn is forked from <a href="https://github.com/michaelcollas/wkhtmltopdf_binary_gem" target="_blank"> michaelcollas/wkhtmltopdf_binary_gem</a>, which is itself forked from <a href="https://github.com/zakird/wkhtmltopdf_binary_gem" target="_blank"> zakird/wkhtmltopdf_binary_gem</a>, and who knows how many forks there are off on side branches...<br />
<br />
That's not something I'm inclined to sign up for! And given the simplicity of adding the single binary in my app (now that I know how to do it), I plan to keep it that way and avoid any of those gems entirely. In fact, I'm beginning to come to the conclusion that it may be better in the long run to install things manually for what I'll call "minor" gem functionality [Note to self: come up with better terminology for this situation] such as this one: I know more of what's going on and have more control over the final outcome.<br />
<br />
It also reminds me that, while we work hard to DRY up our code, the proliferation of similar gems on the web is anything but DRY. JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-69072275180942371702013-06-14T15:22:00.000-07:002013-08-30T11:13:21.135-07:00WickedPDF, wkhtmltopdf, and Heroku...a tricky combinationAt least for me as it took a bit of twiddling to get it running, first in development and then on the Heroku stack. But with a lot of testing and some help from <a href="https://github.com/mileszs/wicked_pdf" target="_blank">unixmonkey</a> and my friends at <a href="http://www.thoughtbot.com/" target="_blank">Thoughtbot</a>, I finally got it working. Here's how.<br />
<br />
First off, I'm running Rails 3.2.13, WickedPDF 0.9.6, and wkhtmltopdf 0.11.0 rc1, running development on a Linux (Kubuntu 10.04) 64-bit machine. NB: The machine you're developing on and deploying to can be different but then you'll have to have two (or more) different copies of wkhtmltopdf and properly configure WickedPDF to handle that difference. I cover that later in this post.<br />
<h2><b>Install wkhtmltopdf</b></h2>The <a href="https://code.google.com/p/wkhtmltopdf/downloads/list" target="_blank">download site for wkhtmltopdf</a> has pretty much everything you need for whatever machine architecture you are working with. Download the matching flavor for your development and production machines. If you have only a single machine architecture, then just call your one binary 'wkhtmltopdf'.<br />
<br />
If you're working with multiple architectures, label each one appropriately, either with the architecture suffix (e.g., -amd64) or simply '-dev' and '-prod'.<br />
<br />
Place the one or more binaries in the directory "bin/", right off your application root (e.g., "#{Rails.root}/bin") and mark them as executable (chmod +x &lt;binary&gt; on linux/mac). Note that for Windows development, you'll have a wkhtmltopdf.exe file for development and probably plain ol' wkhtmltopdf for production.<br />
<h2><b>Install WickedPDF</b></h2>Installation is straight-forward; just follow the instructions <a href="https://github.com/mileszs/wicked_pdf#readme" target="_blank">here</a>. However, there are some critical changes you must make for generating PDFs successfully. The section titled "Styles" gives you the key:<br />
<div style="text-align: center;"><span style="background-color: white; color: #333333; font-family: Helvetica, arial, freesans, clean, sans-serif; font-size: 16px;"><b><i>You must define absolute paths to CSS files, images, and javascripts</i></b></span></div>I'm highlighting this because I missed it the first time around and it cost me dearly. This section goes on to suggest that you use the helpers that are provided. They are:<br />
<pre class="brush: ruby">wicked_pdf_stylesheet_link_tag
wicked_pdf_image_tag
wicked_pdf_javascript_include_tag </pre><u><i>Every</i></u> external file that you use must be referenced with an absolute address, so the normal Rails helpers do not work in many cases [Technical Note: the wicked_pdf helpers simply embed the actual CSS or Javascript in the page]. To make life easier for me, I did the following:<br />
<ul><li>Create a special pdf.html.haml layout for all PDF pages which uses the wicked_pdf helpers</li>
<li>Use the jQuery CDN for including that set of libraries; the Rails-provided helpers do work here because I'm providing an absolute URL; nothing relative to the Rails root of my application </li>
</ul><pre class="brush: ruby">= javascript_include_tag "http://code.jquery.com/jquery-1.10.0.min.js"
= javascript_include_tag "http://code.jquery.com/ui/1.10.3/jquery-ui.min.js"
</pre><br />
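Putting those pieces together, a minimal sketch of such a layout might look like the following (the 'pdf' stylesheet and javascript names here are mine for illustration, not anything WickedPDF prescribes):<br />

```haml
-# app/views/layouts/pdf.html.haml -- layout used only for PDF pages
!!!
%html
  %head
    %meta{:charset => 'utf-8'}
    -# wicked_pdf helpers embed the assets directly in the page
    = wicked_pdf_stylesheet_link_tag 'pdf'
    = javascript_include_tag 'http://code.jquery.com/jquery-1.10.0.min.js'
    = wicked_pdf_javascript_include_tag 'pdf'
  %body
    = yield
```

Then point the controller's PDF render at this layout instead of the application default.<br />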
Now here's the strange part [I believe this has to do with the Rails asset pipeline; I'll update this post once I nail that down]: I also had to add all my CSS and JS files to my config/application.rb file for inclusion in asset precompilation, like so:<br />
<pre class="brush: ruby">config.assets.precompile += ['blueprint/screen.css', 'jquery.timepicker.css', ...]
</pre>Only with this incantation did everything work properly.<br />
<h2><b>Configure WickedPDF for Development & Production</b></h2>If you're developing on Linux 64-bit, then configuration is easy as you'll use the same binary for both development and production. Update your WickedPDF initializer file ("config/initializers/wicked_pdf.rb") as follows:<br />
<br />
<pre class="brush: ruby">WickedPdf.config = { :exe_path => "#{Rails.root}/bin/wkhtmltopdf" }</pre>If you're developing on something other than Linux 64-bit, you'll have to test for the environment and set your exe_path appropriately... something like this:<br />
<pre class="brush: ruby">WickedPdf.config do |config|
  if Rails.env == 'production'
    config.exe_path = Rails.root.to_s + "/bin/wkhtmltopdf"
  else ### Following allows for development on my MacBook or Linux box
    if /darwin/ =~ RUBY_PLATFORM
      config.exe_path = '/usr/local/bin/wkhtmltopdf'
    elsif /linux/ =~ RUBY_PLATFORM
      config.exe_path = '/usr/bin/wkhtmltopdf'
    else
      raise "UnableToLocateWkhtmltopdf"
    end
  end
end
</pre><br />
For Windows, you'll have to check out the RUBY_PLATFORM values you see and modify appropriately; my Windows machine reports "i386-mingw32".<br />
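If you keep multiple labeled binaries in "#{Rails.root}/bin" as described above, the platform check can pick between them by name. Here's a sketch; the name patterns (including the Windows match) are assumptions following my labeling convention, so adjust them to whatever you called your binaries:<br />

```ruby
# Pick a wkhtmltopdf binary name based on the platform Ruby reports.
# Binary names follow the labeling convention described earlier.
def wkhtmltopdf_binary
  case RUBY_PLATFORM
  when /darwin/      then 'wkhtmltopdf-darwin'
  when /linux/       then 'wkhtmltopdf-amd64'
  when /mingw|mswin/ then 'wkhtmltopdf.exe'
  else raise "UnableToLocateWkhtmltopdf: #{RUBY_PLATFORM}"
  end
end
```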
<br />
If you need different versions of the same architecture (say, for final testing of a new version of wkhtmltopdf before rolling into production), then just label it with the version number and update your WickedPDF configuration file.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com3tag:blogger.com,1999:blog-2958021377042600373.post-66569699505107914292013-06-10T18:55:00.000-07:002013-06-11T05:49:03.717-07:00Chrome / PDF Problem...I've recently been working to add PDF generation to a client site and got <a href="https://github.com/mileszs/wicked_pdf">WickedPDF</a> working. I ran into a problem which I finally resolved (RTFM very carefully)... got some great help from the Wicked folks along the way. In any event, I was finally generating my PDFs fine with Firefox, but Chrome kept failing with the message:<br />
<div style="text-align: left;">
<pre class="brush: shell">Sorry, we were unable to find the document at the original source.
Verify that the document still exists.
You can also try to download the original document by clicking here.</pre>
</div>
All that had to be done was to click on the 'here' link provided, but that's a less-than-optimal user experience... Not Good, Google! There are quite a few posts out there with the same problem. I finally figured out that this was caused by the Google Docs PDF/PowerPoint Viewer extension (I was running version 3.10). The easy solution: in Chrome, go to:<br />
<pre class="brush: shell">Customize | Tools | Extensions
</pre>
and disable that extension. Now PDFs are properly displayed.JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com1tag:blogger.com,1999:blog-2958021377042600373.post-72790292232978983462013-05-28T06:59:00.000-07:002013-05-28T06:59:04.885-07:00Fixing a database problem on a Heroku appRecently, I was working on an upgrade to a Heroku app and found to my surprise that my schema_migrations table was wrong in production (horror, actually... I hate this kind of problem because of the possibility of data corruption). The database tables were correct, but the schema was behind by 11 migrations. Suffice it to say I have no idea how that happened, but it must have occurred when I was upgrading from a staging instance to production earlier this month [Note to self: figure out a reliable methodology for doing this].<br />
<br />
There are a number of posts that recommend directly updating the Heroku database; <a href="http://stackoverflow.com/questions/13850681/how-do-i-insert-a-schema-migration-record-on-heroku" target="_blank">this one, for example</a>. I started to take this approach and then realized that there was a safer way to make this change. Heroku has a set of database commands that allow you to copy your production database down to your local machine and copy it back to production. This is better, IMO, than mucking with live production data... even with a backup. Working on your local machine, you can run your tests and generally use all available tools to make sure the modifications work before you upload the result back to production.<br />
<br />
The steps are dead simple:<br />
<ol>
<li>Find the id of your production database:<br />
<pre class="brush: shell">heroku config --app [name]</pre>
the id you want is usually something like "HEROKU_POSTGRESQL_PINK_URL", where "PINK" will probably be some other color.</li>
<li>Make a backup of your production database using the database id:<br />
<pre class="brush: shell">heroku pgbackups:capture --app [name] [database id]</pre>
</li>
<li>"Pull" the production database down to your local machine (Heroku drops it into your development database):<br /><pre class="brush: shell">heroku db:pull --app [name]</pre>
</li>
<li>Make your changes</li>
<li>Run your tests</li>
<li>"Push" the modified data back to production:<br /><pre class="brush: shell">heroku db:push --app [name]</pre>
</li>
<li>Open your browser and run a smoke test on your updated database.</li>
</ol>
Mission accomplished.
JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-62880155309636773002013-05-26T15:25:00.002-07:002013-05-28T10:22:00.290-07:00RSpec / Devise / Capybara & feature specs with HTTP AuthenticationI'm working on a couple of sites which use HTTP Authentication and - after much googling and failed attempts - developed a way to deal with that with a set of spec helpers. It all started with <a href="http://gist.github.com/4158961" target="_blank">this gist</a> from Matt Connolly - thanks Matt!<br />
<br />
For the record, I'm developing with the following:<br />
<pre class="brush: shell">ruby 1.9.3p327
rails 3.2.12
devise 2.2.3
capybara 1.1.4
rspec 2.13.0</pre>
<br />
What I wanted was to be able to say:<br />
<pre class="brush: ruby">before do
  login_user
end</pre>
and have it Just Work.<br />
<br />
In preparation for Capybara 2.0, I'm putting all my integration tests in the spec/features directory and that created some of the confusion. Code in the spec/integration directory has access to the controller, whereas code in the spec/features directory does not. This means that both HTTP Authentication and Devise login must be handled differently. To resolve this, I started with Matt's approach and modified it so that (so far), it works in any of my tests.<br />
<br />
First, I modified Matt's module to add login processing and separate out the HTTP Authentication so that it could be used for controller as well as feature specs. Here's the code:<br />
<br />
<pre class="brush: ruby">## spec/support/auth_helper.rb
module HTTPHelper
  def http_config(test_type)
    @test_type = test_type
  end

  def http_login(user, pw)
    if @test_type == :controller
      request.env['HTTP_AUTHORIZATION'] =
        ActionController::HttpAuthentication::Basic.encode_credentials(user, pw)
    elsif @test_type == :feature
      if page.driver.respond_to?(:basic_auth)
        page.driver.basic_auth(user, pw)
      elsif page.driver.respond_to?(:basic_authorize)
        page.driver.basic_authorize(user, pw)
      elsif page.driver.respond_to?(:browser) && page.driver.browser.respond_to?(:basic_authorize)
        page.driver.browser.basic_authorize(user, pw)
      elsif page.driver.respond_to?(:browser) && page.driver.respond_to?(:header)
        encoded_login = ["#{user}:#{pw}"].pack("m*")
        page.driver.header 'Authorization', "Basic #{encoded_login}"
      else
        # Dump what the driver does support before giving up
        puts "page.driver.methods: #{page.driver.methods.sort}"
        if page.driver.respond_to?(:browser)
          puts "page.driver.browser methods: #{page.driver.browser.methods.sort}"
        end
        raise "I don't know how to log in!"
      end
    else
      raise "I don't know what kind of test this is!"
    end
  end
end

module AuthHelper
  include HTTPHelper

  ### For controller specs
  def login_admin
    login_user(:admin)
  end

  def login_user(user_name=nil)
    http_config :controller
    http_login('HTTPname', 'HTTPpassword')
    if user_name.nil?
      @current_user = FactoryGirl.create :user
      @current_user.confirm!
      sign_in @current_user
    else
      raise NotImplementedError # named-user login is still on the to-do list
    end
  end

  def current_login
    User.find(session[:user_id])
  end
end

module AuthRequestHelper
  include HTTPHelper

  ### For request, feature & view specs
  # pass the @env along with your request, eg:
  #   GET '/labels', {}, @env
  def login_user(user_name=nil)
    http_config :feature
    http_login('HTTPname', 'HTTPpassword')
    if user_name.nil?
      @current_user = FactoryGirl.create :user
      @current_user.confirm!
      # sign_in does not work in feature specs, so log in through the UI
      visit '/'
      click_on 'Login'
      fill_in 'Email', with: @current_user.email
      fill_in 'Password', with: 'password'
      click_on 'Sign in'
    else
      raise NotImplementedError
    end
  end
end

## Relevant portion of spec/spec_helper.rb
...
config.include HTTPHelper
config.include AuthRequestHelper, :type => :request
config.include AuthRequestHelper, :type => :feature
config.include AuthRequestHelper, :type => :view
config.include AuthHelper, :type => :controller
...

## spec/support/devise.rb
RSpec.configure do |config|
  config.include Devise::TestHelpers, :type => :controller
  config.include Devise::TestHelpers, :type => :view
end
</pre>
A couple of things to note: both HTTP Authentication and Devise sign_in differ between controller and feature specs, primarily, I believe, because of Capybara's handling in the feature specs (remember, the controller isn't available in feature specs with Capybara; see the "Gotchas" near the bottom of the <a href="https://github.com/jnicklas/capybara#readme" target="_blank">Capybara Read Me</a>). Also check out this useful <a href="http://robots.thoughtbot.com/post/33771089985/rspec-integration-tests-with-capybara" target="_blank">post from Thoughtbot</a>.<br />
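One side note on the last header-based fallback in http_login: the pack("m…") family is Base64 encoding, and the "m0" variant produces the bare string with no trailing newline, which is safer inside a header value. Standalone, the encoding looks like this:<br />

```ruby
# Build a Basic auth header by hand. "m0" is Base64 with no line
# feeds appended (a stray newline inside a header value can confuse
# some servers); it matches what encode_credentials produces.
user, pw = 'HTTPname', 'HTTPpassword'
encoded  = ["#{user}:#{pw}"].pack("m0")
header   = "Basic #{encoded}"
# header => "Basic SFRUUG5hbWU6SFRUUHBhc3N3b3Jk"
```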
<br />
Before you say anything... yes, I know it's not very DRY: I could have just included the HTTP Authentication code in each of the different authorization modules. But while working on it, I wanted the code isolated for clarity and I'm just happy to have the thing working. Feel free to clean this up if you want.<br />
<br />
For now I'm not using the login_admin method, but will be working with that down the line.<br />
<br />
Now I can finally get back to the real work of developing my app; hope this helps someone else.<br />
<br />
EDIT: Thanks to the folks at <a href="http://www.thoughtbot.com/" target="_blank">thoughbot</a> for <a href="https://github.com/plataformatec/devise/wiki/How-To:-Test-with-Capybara" target="_blank">this link</a> which describes how to login using <a href="https://github.com/hassox/warden#readme" target="_blank">Warden</a>. I haven't tried it, but it looks promising.<br />
<br />
<br />JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-40933191650781120542013-05-19T08:44:00.000-07:002013-05-19T08:44:09.686-07:00Bash aliases and variables don't mixFor a long time, I've used bash aliases such as:<br />
<pre class="brush: plain">
alias hgrep='history | grep $1'</pre>
and they've worked just fine. However, when I've tried to do something a bit more complicated, such as:<br />
<pre class="brush: plain">
alias hprju='heroku $1 --app project $2'</pre>
it fails with an error message:<br />
<pre class="brush: plain">
! `--app` is not a heroku command.</pre>
Bummer! What I finally realized is that the first example isn't actually replacing $1 with what I specified on the command line. Instead, $1 has NO replacement and my additional text is simply appended to the end of the line. To see how this works, just run the following:<br />
<pre class="brush: plain">
alias test='ls'
test /etc</pre>
and you'll get a directory listing of /etc. After some googling -- with a number of questions along the lines of "Why does my first example work but not my second?" -- I found <a href="http://stackoverflow.com/questions/8734336/why-does-variable-expansion-within-an-alias-work-as-intended-in-only-one-of-th?lq=1" target="_blank">this link</a>, where the second answer had the key. The Bash Reference Manual section on <a href="http://www.gnu.org/software/bash/manual/bashref.html#Aliases" target="_blank">Aliases</a> clearly states:<br />
<blockquote class="tr_bq">
There is <i><b>no mechanism</b></i> [my emphasis] for using arguments in the replacement text, as in csh. If arguments are needed, a shell function should be used (see <a href="http://www.gnu.org/software/bash/manual/bashref.html#Shell-Functions" target="_blank">Shell Functions</a>). </blockquote>
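So the fix for my failing alias is a shell function, which really does place its arguments where you put them:<br />

```shell
# Replaces the broken hprju alias: a function, unlike an alias,
# can use positional arguments anywhere in the command line
hprju() {
  heroku "$1" --app project "$2"
}
```

Now "hprju restart web" runs "heroku restart --app project web", with each argument landing in its intended position.<br />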
This situation has bugged me off and on for some time and I'm glad to finally understand what's happening. Hope it helps someone else. JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-65678693672813236332013-04-23T15:14:00.001-07:002013-04-23T15:30:45.299-07:00RVM and vim-rails on LinuxI recently installed Tim Pope's <a href="https://github.com/tpope/vim-rails">vim-rails</a> plugin (amazing stuff) but pretty quickly ran into a problem running things like the :Rake or :Rscript commands. I'd get something like the following:<br />
<br />
<pre class="brush: plain">Could not find ZenTest-4.9.1 in any of the sources </pre>
<br />
Checking things from the command line showed that ZenTest was, in fact, properly installed. Finally, the output from a failed :Rake command gave me the clue I needed to resolve this issue:<br />
<br />
<pre class="brush: plain">/home/jseidel/.rvm/gems/ruby-1.9.3-p194@global/gems/bundler-1.1.5/lib/bundler/spec_set.rb
in `block in materialize':
Could not find gherkin-2.12.0 in any of the sources (Bundler::GemNotFound)</pre>
The Ruby version vim-rails was using was the default
<br />
<pre class="brush: plain">"/ruby-1.9.3-p194@global"</pre>
instead of the project-specific Ruby I had specified:
<br />
<pre class="brush: plain">"/ruby-1.9.3-p327@Rails3212"</pre>
<br />
The problem was that I was starting Vim from the system application launcher, which picked up the default value of $GEM_HOME since no project was associated with it.<br />
<br />
The simple workaround is to start Vim from the project's command line; then all the environment settings are correct.<br />
<br />
<b><i>EDIT-1</i></b>: Just found <a href="https://github.com/tpope/vim-rvm">vim-rvm</a>, again by Tim Pope, which takes care of all of this and then some.<br />
<br />
<b><i>EDIT-2</i></b>: To get things to work together properly (I'm on RVM 1.19.6 stable and the latest vim-rvm and vim-rails), you have to convert your .rvmrc file to a .ruby-version file using the command:<br />
<br />
<pre class="brush: bash">rvm rvmrc to [.]ruby-version</pre>
JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-32334976839135559032013-02-23T15:21:00.002-08:002013-02-23T15:21:27.927-08:00Asset pipeline flowing smoothly...Finished my testing with the changes for an addition to my existing app and decided it was time to get this into production. Only hitch was that my app was running on bamboo and you need to be on cedar if you want to implement the asset pipeline. The Heroku folks have a good introduction to how to do this <a href="https://devcenter.heroku.com/articles/cedar">here</a> and <a href="https://devcenter.heroku.com/articles/rails3x-asset-pipeline-cedar">here</a>. Well laid out and gave me everything I needed to create a new, test app to deploy to.<br />
<br />
First problem I ran into was Heroku complaining that it couldn't install sqlite3. This is a well-documented problem: you have to make sure your sqlite3 gem is available only in test and development, not in production, since Heroku doesn't support SQLite. It's a simple fix:<br />
<br />
<pre class="brush: ruby"> gem 'sqlite3', :group => [:development, :test] </pre>
<br />
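For completeness, the database split in the Gemfile usually looks something like this on Heroku; the pg gem is the counterpart I'm assuming here, since Heroku's production database is Postgres:<br />

```ruby
# Gemfile -- sketch of the database-gem split for a Heroku app
group :development, :test do
  gem 'sqlite3'
end

group :production do
  gem 'pg'
end
```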
I dutifully made the change and pushed to my test app, but it continued to fail. After an embarrassingly long time, I remembered that I was working on branch 'cedar' (isolating these changes) but the push was going to 'master'. This was a quick fix:<br />
<br />
<pre class="brush: plain">git push test-app HEAD:master</pre>
<br />
pushes the current HEAD to master on Heroku... problem solved.<br />
<br />
Then I ran into a further complication; Bundler worked fine, but the Heroku assets:precompile was failing with<br />
<br />
<pre class="brush: plain">could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port xxxx?</pre>
<br />
This was a simple solution, found in the 'Troubleshooting' section of the previous reference; just add this to my config/application.rb file:
<br />
<pre class="brush: ruby">config.assets.initialize_on_precompile = false</pre>
Now I could run my migrations, install the sendgrid addon, and my test-app was up and running. Hopefully this can save someone a bit of time.
JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0tag:blogger.com,1999:blog-2958021377042600373.post-41564610636477662402013-02-18T04:50:00.003-08:002013-02-18T05:01:59.681-08:00Turbulence in the asset pipelineI was running Rails 3.0.20 for a small website and decided it was time to upgrade to the asset pipeline. I didn't require it for this application, but figured it would be good to get some experience. I also wanted to set the stage for using Twitter Bootstrap. So I figured this past weekend was a good time.<br />
<br />
Upgrading to Rails 3.2 was a cake-walk; upgrading to the asset pipeline was another matter. Here's what happened to me in the hopes that it will help other folks.<br />
<br />
To do the upgrade I:<br />
<br />
<ol>
<li>Ran my tests in current app - all green - and made last-minute commits & push to master.</li>
<li>Created a new branch (rails32) so that I could retreat to safety if needed (I was about ready to toss the whole thing at one point).</li>
<li>Went to Ryan Bates' Rails Casts. First episode: <a href="http://railscasts.com/episodes/318-upgrading-to-rails-3-2?view=asciicast">Upgrading to Rails 3.2</a> followed by <a href="http://railscasts.com/episodes/282-upgrading-to-rails-3-1?view=asciicast">Upgrading to Rails 3.1</a>. Great stuff! Rails 3.2 without the asset pipeline was up and running in about an hour and worked like a champ: all tests still green.</li>
<li>On to the asset pipeline and another Rails Cast: <a href="http://asciicasts.com/episodes/279-understanding-the-asset-pipeline">Understanding the Asset Pipeline</a>. This took a bit more time what with moving things around, but another couple hours or so and ready for testing.</li>
<li>Ran my tests in the new app and they were still green, so it all looked good until I displayed the app live - that was an awful mess:</li>
<ol>
<li>Navigation didn't display in the correct location... it was showing up in code sequence rather than being re-positioned to the top of the page</li>
<li>The order of the navigation elements was reversed as was the body display: the left-most columns were displayed on the right, as if someone had done a "reverse" on all the data</li>
<li>The link elements were displaying not only the display text but also the link itself. The link_to helper was working fine and the a elements were perfect... it was just the display that was a mess.</li>
<li>Text elements were overlapping each other in areas.</li>
</ol>
</ol>
After many hours of Googling for answers, invoking the help of my friends on the <a href="http://www.meetup.com/ebruby">East Bay Ruby Meetup</a> mailing list, and a key post from Billy, I found the answer... actually two of them.<br />
<br />
<ol>
<li>I had to explicitly specify the order of my javascript and css files in the manifest files: application.js and application.css. This meant I stopped using the "//= require_tree ." specification and hard-coded the files I wanted in the order that they had been previously specified. Not only did it get the order right but it also excluded certain elements that were present in the tree that I hadn't been using, such as my jasmine_examples and the myriad of optional blueprint files. That fixed almost everything.</li>
<li>Remaining was the bad display of the a elements. I started removing CSS files one at a time and struck pay dirt on the first one: blueprint/print.css. I've been using this one all along but in the new configuration it was causing the problem. As soon as I removed this file from the mix, I was back to my old display and running the asset pipeline. Whew! I don't know why this code was suddenly causing a problem but I'll deal with that later.</li>
</ol>
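For illustration, an explicit application.js manifest along those lines looks like this; the file names here are from my app (the CDN-hosted jQuery aside, yours will differ):<br />

```javascript
// app/assets/javascripts/application.js
// Hard-code the requires in the old load order instead of using
// "//= require_tree .", which pulled in files I never used:
//= require jquery
//= require jquery-ui
//= require jquery.timepicker
```

The application.css manifest gets the same treatment, minus blueprint/print.css.<br />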
So... what did I learn so that hopefully I don't make the same mistake next time? YAGNI<br />
<br />
<ul>
<li>Beware of global options (i.e., //= require_tree .) when upgrading an existing rails app. It may be great for a new app where there's very little to be included, but the cruft that seems to always collect can come back and bite you.</li>
<li>As always, take a minimalist approach: include only what you really need and leave the rest for the future</li>
<li>Consider adding some tests that would identify this kind of display-centric issue. Not exactly sure how to handle this, but I'll probably at least add a negative test so that if this kind of error shows up again my tests will fail.</li>
</ul>
There's still the question of installing this in production. I'm running on Heroku and I understand there are some issues to be dealt with there, but that'll have to wait for another day.<br />
<br />JESiihttp://www.blogger.com/profile/03694926683464859622noreply@blogger.com0