How to fix 404 errors in Laravel application authentication

Introduction

I am not a Laravel programmer by any stretch of the imagination, but I like to think I am a pretty good engineer who can solve problems regardless of the situation. Recently, I was working with one of my clients on setting up a validated environment for their customers using a standard operating procedure (SOP). They had the application developed by a third-party vendor. Long story short, since their target market is a highly regulated industry, they must have detailed and exhaustive documentation of every step of setting up and tearing down their computer systems.

It's not easy to write SOPs

There is a video where a father asks his kids for instructions on making a peanut butter and jelly sandwich. While for most of us making a peanut butter and jelly sandwich is highly intuitive and easy, for a person who doesn't know what is to be done, it can be stupefying. When we write instructions, we already know a lot because we have done the task successfully, so we assume the reader knows those things too and leave those steps out of our documentation.
Sometimes it goes the other way: we miss some steps in our documentation, and when someone else finds and reports the gaps, we quickly make the change but do not update the documentation, hoping the problem will not recur. That exacerbates the actual problem: if someone else later has to reproduce the environment using those SOPs, they are basically stuck.
Therefore it is very important to keep the documentation accurate and up to date. Admittedly, it is tedious and cumbersome, but in the long run it is definitely the right thing to do.

Laravel and Apache server

I had a Laravel application fronted by an Apache server, with a MySQL database as the backend. The developer had set up an instance, and my job was to replicate it using the SOP they had built, to ensure it was accurate and the process predictable and repeatable. I followed the steps exactly as documented and was able to get to the login screen. So far so good. But when I tried to log in using the test credentials, I was shown a 404 - Page not found error. Initially, I worked with the developer; they did something and fixed it. Interestingly, they failed to mention what they did, and neither was it documented in the SOP. So when I later had to rebuild the server to validate the SOP, I faced the same issue. This time, instead of reaching out to the developer, I did my own research and, voilà, I was able to solve the problem.
The research led me far and wide, and in one of the forums I found the likely answer: set AllowOverride All on the parent directory of the application, which in my case was /var/www. The post suggested setting it in the main Apache configuration file, but I always try to localize these solutions and avoid global changes if I can help it, in order to keep the installation secure. So I added a Directory directive in the VHost configuration file, which enables AllowOverride All for that VHost only while leaving the original configuration untouched.
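For reference, the Directory block I added looked roughly like this (the domain and paths are placeholders; adjust them to your layout):

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/my-laravel-app/public

    # Allow Laravel's public/.htaccess rewrite rules to route requests
    # through index.php. Scoped to this VHost only, instead of changing
    # the global Apache configuration.
    <Directory /var/www/my-laravel-app/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

Without AllowOverride All, Apache ignores the .htaccess rewrite rules that send every request through index.php, so any URL other than the document root, including the POST to /login, comes back as a 404.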

Conclusion

They say that Laravel is a PHP framework for web artisans with elegant syntax. But these issues are pretty simple and will be prevalent in all applications that require authentication. Why is this not better documented? More importantly, why does the authenticate or login action in Laravel return a 404 error?
I wonder...

Bring your terminal to life with Zsh

Introduction

I have been using Linux for the better part of two decades now, but I won't say I am an expert in Linux. Until recently I didn't even know that I could change shells, or what capabilities other shells have. I was more of a functional, casual user who knew how to get a few things done but didn't know the internals and the intricacies of the operating system. One of those intricacies is changing the default Linux shell from bash to zsh. Zsh is indeed a powerful shell with a lot of extensions, customizations, and community support, including themes to tailor the shell to one's own preferences. That was another thing I didn't know: there are themes for shells. I always thought the bash shell is what you have and you have to use it without complaints.

Well, I was wrong: there are a lot of things you can change about your terminal, and in this blog post I am going to show you how. I know there are a lot of tutorials around the web that show exactly this, and some are better than others. But this is more for me than anybody else. Really. When I started working for AWS, I got the opportunity to meet very smart people who have extended the capabilities of software tools to the limit of what they can do, and then some. That is one of the reasons I wanted to learn more and more about the basics, so that I can have a very solid foundation.

Why Zsh?

By installing a different shell, you can decorate and customize your plain old command prompt very easily, and you have a lot of options. Of course, you can also customize and decorate your command prompt in bash, but for that you need to know a great many configuration options and parameters, and you need to build the configuration file by hand as well.

As I mentioned earlier, there is a vibrant and strong community, and several themes are available along with pre-made scripts that you can just run, choosing options to configure and tailor the shell. That's what I plan to show you in this post.

Preparation

Almost all Linux installations come with the bash shell by default. Depending on the distribution you use, other shells may also be installed, but we need to ensure that Zsh is present. I am using Ubuntu to show the commands; the steps remain the same on a Red Hat clone as well, except that Red Hat clones use yum to install and maintain software whereas Debian and its clones use apt.

  • Ensure you have Zsh installed. Run the following command.
    sudo apt install zsh -y
  • Check if the Zsh installation is done properly and where it is installed by running the following command.
    which zsh
    It should show an output something like /usr/bin/zsh
  • Now that Zsh is installed, change your default shell to zsh by running the following command.
    chsh -s $(which zsh)
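chsh records the new shell in the passwd database, and it takes effect at your next login. A generic way to check the recorded login shell is to read the seventh colon-separated field of your passwd entry (the sample entry below is illustrative):

```shell
# Real check on your own account: getent passwd "$USER" | cut -d: -f7
# Demonstrated here on a sample passwd entry:
printf 'alice:x:1000:1000::/home/alice:/usr/bin/zsh\n' | cut -d: -f7
# prints: /usr/bin/zsh
```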

Once the command runs successfully, restart your session so the terminal starts in the new shell environment. When the terminal starts, the Zsh configuration script runs automatically to let you configure settings and preferences. We are not going to do any configuration at this time and will leave it for later, once all the components are installed and ready. At the prompt, select 0 to exit and create a placeholder .zshrc file so that the script doesn't run again.
(0) Exit, creating the file ~/.zshrc containing just a comment.
That will prevent this function being run again.

Now we are getting to the good part. First of all we need to download and install Oh My Zsh, a delightful, open source, community-driven framework for managing your Zsh configuration. Download and run it using either of the commands below. Generally, Linux installations come with wget preinstalled, while curl may need to be installed separately. Curl is a powerful tool for testing and debugging, but if you don't have it, don't worry; the wget variant works just as well.

sh -c "$(wget -O- https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
==OR==
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

Next we make the terminal our own by installing and configuring a theme. There are a lot of themes available for Zsh, but my favorite is Powerlevel10k. To use it, first clone the Powerlevel10k repo using git. If you don't have git, install it by running sudo apt install git -y in your terminal.

git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k

Once this is done, we will clone the autosuggestions and syntax-highlighting repos, plugins which greatly extend the capabilities of Zsh. There are a lot of plugins available for Oh My Zsh, but for now we will concentrate on these two.

git clone https://github.com/zsh-users/zsh-autosuggestions.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/plugins/zsh-autosuggestions


git clone https://github.com/zsh-users/zsh-syntax-highlighting.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting

Now that all the major components are downloaded and ready, it's time to personalize and configure the terminal prompt. Open the .zshrc file (it's located in your home folder) in your favorite editor and add the following lines:
ZSH_THEME="powerlevel10k/powerlevel10k"
POWERLEVEL9K_MODE="nerdfont-complete"

Uncomment the options to activate them as needed. I suggest the following options.
CASE_SENSITIVE="true" # Around line 21
ENABLE_CORRECTION="true" # Around line 45

Finally add the plugins that we cloned earlier for autosuggestions and syntax-highlighting at around line 71
plugins=(git zsh-autosuggestions zsh-syntax-highlighting sudo)
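Taken together, the relevant lines of ~/.zshrc end up looking like this (the exact line numbers vary between Oh My Zsh versions):

```
# Theme and font mode for Powerlevel10k
ZSH_THEME="powerlevel10k/powerlevel10k"
POWERLEVEL9K_MODE="nerdfont-complete"

# Options uncommented from the stock template
CASE_SENSITIVE="true"
ENABLE_CORRECTION="true"

# Plugins cloned earlier, plus the bundled git and sudo plugins
plugins=(git zsh-autosuggestions zsh-syntax-highlighting sudo)
```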

Once done, save and exit. We are not done yet: we need to install a font that provides the glyphs and icons to spice up the prompt. The font we need is Fira Mono. While the font itself is freely available from Google, we need the patched version; the entire patched-fonts library (Nerd Fonts) is available on GitHub. The library is huge and you don't really need all of it; just download the Medium regular complete font and that will be sufficient. To install it, you can either double-click the font file and click Install in the font viewer, or create a .fonts directory in your home folder and copy the file there. I recommend the first option.

Once installed, configure the Terminal app under Preferences --> Profile --> Text to use the Fira Mono font and harness the full power of Oh My Zsh.

At this point you need to log out and log back in to ensure the changes made so far are persistent.

After logging back in, launch Terminal. The Powerlevel10k configuration wizard will run for the first time. Follow the prompts to complete the setup. My suggestions for the configuration are:

  • For Prompt Style choose (3) Rainbow
  • For Character Set choose (1) Unicode
  • Choose whether you want to show current time on the command prompt or not. I suggest (3) 12 hour format
  • For prompt separators choose (1) Angled
  • For prompt heads choose (1) Sharp
  • For prompt tails choose (1) Flat
  • For prompt height choose (1) One line
  • For prompt spacing choose (1) Compact
  • For Icons you can choose your preference. I like a lot of icons, so I recommend (2) Many icons
  • For Prompt flow choose (1) Concise. Again, this is a personal choice
  • Choose (y) for Transient Prompt to provide a clean work area
  • Choose (2) Quiet for the Instant Prompt mode
  • Choose to save to .zshrc and apply the changes we just made.

And that's it. You are done! You now have a fantastic command prompt.

Preparing for AWS Certified Solutions Architect – Professional

Introduction

They say that AWS Certified Solutions Architect - Professional is one of the most difficult AWS certification exams. I am very excited that I passed it on the first attempt. To tell the truth, I was extremely nervous about the exam. You see, till I joined AWS in March 2019, I didn't have any experience with or even exposure to AWS. I knew about the cloud and what it can do for businesses, of course, but didn't really know the technologies, the architecture, or the scope and depth of AWS. Going from there to being an AWS Certified Solutions Architect at the professional level is huge, even if I say so myself. So how did I get here? Well, that's the story I am going to tell here. I promise I won't bore you with the gory details of the technology or of how I spent hours and hours studying and reading (which incidentally I did); this will be a quick recap of my experience, what I found useful, and what to ignore.

The Prep

It took me about six months to the date, counting from my previous exam, to take this test. Not all of that time was spent studying and preparing; there were times when I got derailed and didn't study for days on end. Overall, I estimate it would take somewhere in the range of 3-4 months at a minimum to really prepare well for this exam. I initially started with the SA Pro course on Linux Academy, as I'd had a really good experience with it during the SA Associate exam. The course is really in-depth: the trainer, Adrian Cantrill, goes deep technically and provides a lot of hands-on demonstrations and labs, along with many links to product documentation, which is immensely helpful for understanding each service in detail. It was definitely helpful to be familiar with the console screens, even though they were outdated. And that was the problem with that course: it was made in early 2019 and, for a myriad of reasons, has not been updated. Now that A Cloud Guru has acquired Linux Academy, ACG is pushing students toward its own course, and since that one was somewhat more recent, I completed it as well. It has nowhere near the depth and detail of the LA course, but the trainer, Scott Pletcher, does a fantastic job of explaining the concepts, preparing the student for real-world scenarios, and walking through the questions step by step, which was helpful. More importantly, he provides a lot of links to re:Invent videos and white papers which are crucial to read before even considering taking the exam.

Reading

There are so many whitepapers on the AWS site that you could spend several months reading and still not get through them all, so you need to be very judicious in choosing which ones to read. My recommendation is to read the following without fail. There are also some recommendations on the AWS training site that provide guidance on preparing for the exam.

Although there are a few more you should read, these are the absolute minimum.

The FAQs on the AWS site are also a very valuable resource with a lot of information. FAQs in general have been vilified by overuse and by the nonsense FAQs some companies put on their sites, but the FAQs AWS has put together are of really high quality and provide a great overview of each service, its boundaries, and its capabilities. Spend some time going through as many as you can, but at a minimum review the following:

The next resource is the product documentation. It can be very long, boring, and at times confusing, but the level of detail it provides is just amazing. You may not have to read a developer guide cover to cover; figure out which areas are a little weak for you and invest time in reading those sections. Regardless of your comfort level, one of the most important pieces of documentation you can read is the AWS Organizations guide. It is really useful. Also consider reading some of the documentation from the list below.

Watching

YouTube is a great resource if you use it well. AWS has its own channel where it posts re:Invent videos and many of the tech talks it hosts. They are great and can provide 300-400 level information. I strongly recommend spending some time watching the re:Invent videos. Make sure to watch videos that are at least six months old, as services and features announced within the last six months will not appear on the exam. I recommend watching the videos for:

Practice

Finally, there is no substitute for practice. Take as many practice tests as you can. Whizlabs has a lot of practice exams. Although I found the questions to be oriented more toward development, and some of the answers were flat out wrong, they helped clarify things quite a bit and forced me to read product documentation that I otherwise would not have read.

Exam day

Finally, on exam day, get plenty of rest and calm your mind. If you are into meditation, I recommend meditating for some time before going to the exam; it really helps with concentration, as the exam is really long at 180 minutes and 75 questions, and it can be very intense. If not, find whatever gets you in your groove and go for it. In today's world, sitting in one place without any gadgets or notifications is not easy. A couple of hours before the exam, avoid drinking too many fluids: there are no breaks, and if you need to use the restroom, the exam timer continues to wind down. So plan accordingly.

Reading long questions and longer answers is very tedious, and it is easy to miss an important point. One strategy for long answers is to read them vertically. While the technical definition of vertical reading is different, I use it here to mean scanning the text of the answers vertically to identify the salient points that differ and would make sense in the scenario. In several situations, the answers are very similar, with only one or two things different in each; focus on those. Choose the one that BEST suits the situation asked in the question. Reading horizontally can tire you out, and it is very easy to get lost in the word jungle and miss the right answer.

With all this preparation, it is possible that during the exam you might feel it is easy and lose concentration and focus. The trick is to take a few very short breaks at strategic intervals to stay sharp. The exam center provides you with a piece of paper and a pencil; use it to draw architecture diagrams to visualize the question and to write down your thoughts. It helps.

Final thoughts

With all this preparation, I am certain you will pass the exam easily and with flying colors. AWS Certified Solutions Architect - Professional is a very valuable certification that will not only enhance your career prospects but also improve your understanding of the cloud and of highly scalable, resilient, and highly available architectures.

All the best!

How to unblock file sharing on Windows

Have you ever come across a situation where you want to access another computer via its UNC path (\\computer-name\share) and hit a "Network path not found" error? Although not too uncommon, this error can puzzle you when you know all the services required for file sharing are running. I faced this issue the other day, and after a bit of a tussle I managed to solve the problem. We were trying to access a computer at one of our remote locations and it kept giving this error. What made it even more puzzling was that local users were able to access the same computer using the UNC path. We tried everything we could think of, but the error remained the same.

Then I searched the internet and found some references, including a very useful knowledge base article (KB 840634) on the Microsoft support website that addressed this issue. Apparently somebody had enabled the firewall, and in its default configuration the firewall blocks File and Printer Sharing from any network, allowing access only from the same subnet.

To allow file and print sharing on your computers from any network, follow the steps described below:

  1. Click on Start --> Settings --> Control Panel. If you are using the Windows XP Start menu, you can find the Control Panel on the right side of the Start menu.
  2. In Category view, click on Security Center and then click on Windows Firewall to open the Windows Firewall configuration window. In Classic view, you will see the Windows Firewall icon directly.
  3. Ensure that the Don't allow exceptions check box is not checked on the General tab.
  4. Click on the Exceptions tab, highlight File and Printer Sharing, and click Edit... to modify the settings. File and printer sharing is done over port 445.
  5. Highlight TCP 445 and click Change scope... (or just double-click it). In the resulting screen, select whether you want to allow access from any network, from just your own subnet, or from a custom list of networks. If you choose a custom list, you will have to manually enter each subnet you want to allow.
  6. Click OK your way out, and you should now be able to access the computer(s) over the network using the UNC path.

A word of caution: I don't recommend setting the scope to Any as shown here. It means that anyone who can reach your network will be able to access the computer over the network. Either select just the subnet you are on, or explicitly define the subnets you know and trust that need to connect remotely using the UNC path.
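If you prefer the command line, the same change can be scripted with the XP SP2-era netsh firewall context (a sketch; the name value is just a label, and scope=SUBNET keeps access limited to your local subnet, in line with the caution above):

```
netsh firewall set portopening protocol=TCP port=445 name="File and Printer Sharing (445)" mode=ENABLE scope=SUBNET
```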

Quality issues with Apple Watch


Background

I have been a big fan of Apple devices for a really long time, right from the iPod 5th Generation with Video (I still have it and it still works even after 14 years) to the iPhone 11 Pro. Over the years I have used a lot of Apple devices and swear by the quality, reliability, and durability of Apple products. But sadly, nowadays the quality of the products has been declining quite a bit. Apple's image took a beating after the Antennagate, Bendgate, and Batterygate issues, but it weathered those storms and kept bringing out great products. Although they may not have been bleeding edge, the latest technology, or even the most innovative, they were high quality products. Now, however, it seems there are quality issues with the Apple Watch.

The issue

I'm afraid there may be another scandal brewing with the Apple Watch line of products. It could be called Watchgate, or Screengate. This is an issue where the Apple Watch screen just pops off.
Apple watch with popped screen
I am by far not the only one facing this issue; there are threads on the Apple community discussions which you can read here and here. In fact, one of my friends also faced this issue. It is a known issue, and Apple has acknowledged it: it is now a separate category on Apple's support page.
Apple support options
I faced this issue with my Apple Watch Series 3 twice in the last 18 months. The first time it was within warranty, and Apple quietly replaced the watch for me. But when the screen popped off again, the watch was out of warranty, and they refused to repair it unless I paid a ridiculous $159 repair fee.

Frustrated!!!

I was left with a watch whose screen had popped off, the poor glue trying in vain to keep the assembly in place. Paying $159 to repair a watch that I expected to develop the same issue again didn't seem worth it, so I had kind of resigned myself to not using my Apple Watch again. My wife even suggested getting a Fitbit so that we could be a Fitbit family. I was really frustrated, and I fired off an email to Tim Cook, the text of which is below:


Hello Mr. Cook,

I have been a lifelong Apple user, starting from the iPod 3rd generation and the successive products. In fact, I have never used any other smartphone in my entire life. It was very exciting to buy my Apple Watch 3 around 1.5 years ago, but since then my experience with the renowned Apple quality has not been what I have come to expect. The screen has a tendency to come off every now and then, and it appears that the glue used is of low quality. I have been using the watch as per Apple's recommendations and have never even once taken it in water.

I had it repaired once last year, when it was replaced for me. Now the new watch has the same problem in less than 9-10 months, and I have been asked to pay $159 to repair it, which doesn't seem right to me. It is clearly either a design or a manufacturing defect resulting from the use of low quality components in the manufacturing process.

I have worked at Apple for a few years and I know the rigorous demands on quality that Apple has and working at Apple has shaped me and my career in a positive manner.

I just hope you will take the time to read this email and take the appropriate action. I really love the Apple Watch and feel really bad to stop using it because of the issue.


Tim Cook is a busy man and he is preparing for Apple's event tomorrow on Oct 13th. But I believe he should address customer issues on priority.

Voilà, the solution

Anyway, I slept on the issue and in the morning had a brainwave: what if I glued the screen back on with super glue? I always have some kind of all-purpose super glue at home, and this time around I had Gorilla Glue.

So the first thing I did in the morning was apply a little Gorilla Glue around the edge and put the screen back, taking care not to damage or displace the connector that connects the screen to the body. I had to factory reset the watch several times during the day (I don't know why it didn't start up immediately), but by the end of the day it was working again. Now I am very happy and proud to say that I fixed the Apple Watch myself without paying the ridiculous repair charges, and hopefully the watch is good for another couple of years.

How to create a bootable USB drive on MacBook

Introduction

I like to tinker with technology; I think that much is evident from my website and the type of posts I write here. Some time back, I was playing around with a Raspberry Pi. It was an RPi Zero, so it didn't have a lot of capability, but I figured out that I could run Raspbian Buster (Debian Buster) on it, and I even ran this website on it until recently, when I migrated it to AWS. While playing around with the RPi Zero, I worked out how to create a bootable USB drive on a Mac using diskutil and dd. The same process also works for creating a bootable SD card.

The problem

For quite some time now, I have been using a MacBook as my primary computer. While it is a great machine for personal productivity and development, I hadn't really dived deep into system administration on it. I needed to figure out how to format a USB drive or an SD card on the Mac and write a bootable image to it.

I did a lot of research and came across a very nice article by a Microsoft engineer, which I used to accomplish the task very easily. But I can't find it anymore, so instead of relying on someone else, I thought I would document the process myself and add some additional details so that others can benefit. I could very easily have used my Windows computer, but that wouldn't be true to my tinkerer nature.

Recently, I revived an old 2009 laptop which refused to run any of the newer OSes. Some more research into possible OSes suggested that Lubuntu would run on it easily, so I went ahead and downloaded the latest version of Lubuntu, Focal Fossa, and set to the task of creating a bootable USB drive.

The technical details

To achieve this, I needed only two tools on my MacBook:

  1. diskutil
  2. dd

Let's take a look at the details now.

The first thing that needs to be done is to determine the device details of the USB drive. To do that, first insert the drive in your USB port and run the command below:

diskutil list

List all devices using diskutil

This command shows the disks mounted on the system. Identify your USB drive by its size, and note down the device identifier, which will be in the form /dev/diskN where N is a number. Once the device number is determined, run the following command with root (sudo) privileges:

sudo diskutil eraseDisk FAT32 LABEL MBRFormat /dev/diskN

Erase disk using diskutil

Make sure to replace LABEL with the name you want and N with the number noted above.

It will take a few minutes to complete the process and once it is complete, run the following command:

diskutil unmountDisk /dev/diskN

Again taking care to replace N with the appropriate number.

Once the disk is unmounted, we are now ready to write the bootable image to the USB drive. To do so, run the following command:

sudo dd bs=1m if=/Path/to/fileimage.iso of=/dev/diskN

Create bootable disk using DD

Depending on the size of the image, this can take several minutes. You can check the progress by pressing Ctrl+T in the terminal. Once finished, run the following command to eject the disk gracefully:

diskutil eject /dev/diskN

Don't forget to replace the N!

Even deeper details

Now that you understand the commands, let's take a detailed look at the verbs and the switches we used in the commands.

  • diskutil: We used the following verbs with this command.
    • list: This option lists all the drives that are attached and mounted on the operating system
    • eraseDisk (note the capital D): This option will erase the disk that is provided as an option. It also takes the following arguments:
      • Filesystem Type: The filesystem to format the disk with, for example FAT32 or ExFAT
      • Label: The name to be given to the disk
      • Format: The partition scheme of the disk. Valid values are APM (Apple Partition Map), GPT (GUID Partition Table), and MBR (Master Boot Record). Using MBR ensures that the drive will be bootable on non-Mac machines as well.
      • Device: The device number that we noted earlier.
        Note: This option needs sudo or root privileges to run.
    • unmountDisk: This will unmount the entire disk including all the volumes that may be present on the disk. It needs the device argument to work.
    • eject: This will eject the disk from the computer and make it safe for the removable media to be removed from the computer without the risk of data corruption.
  • dd: dd is often expanded as "data duplicator" and is used to copy and convert data from one device to another. It is a low-level Unix command-line utility that is a great addition to any system administrator's toolkit. We used the following switches in this exercise:
    • bs: Stands for block size. The default block size for dd is 512 bytes, and there is no single right value; there is a good discussion here. This operand sets both the input block size and the output block size to the desired value, which I have set to 1 MB.
    • if: Denotes the input file where the dd should read from instead of standard input.
    • of: Denotes the destination where dd should write to instead of standard output.
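To get a harmless feel for dd's operands without touching a real device, you can copy a throwaway file instead (the paths below are just examples):

```shell
# Create a small dummy "image" and copy it with dd, exactly as we did
# with the real ISO, but using ordinary files instead of /dev/diskN.
printf 'hello bootable world' > /tmp/demo-image.img
dd if=/tmp/demo-image.img of=/tmp/demo-copy.img bs=1024 2>/dev/null
cat /tmp/demo-copy.img   # prints: hello bootable world
```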

Conclusion

That's it for now; I hope this short tutorial has been helpful. Instead of the GUI tools, I have found that these command-line utilities give the system administrator a lot of flexibility and power, but they can be confusing at times and have the potential to destroy data if used incorrectly.

Updating a DynamoDB attribute with a hyphen or dash (-) in the name using CLI or SDK

Background

As part of my personal growth plan and work commitments, I am working toward the AWS Certified Developer - Associate certification using the Linux Academy platform. One of the lab exercises on DynamoDB required updating an attribute using the SDK and performing conditional updates and atomic counters on tables. Being who I am, I did not use the examples they had provided, but created my own table for a database of books I own and proceeded to come up with my own attribute names for the items.

The problem

As it happened, I created attributes like book-title, book-author, book-price, etc., which in itself is not a problem. However, the lab exercise had me perform item updates using the Boto3 Python SDK, which got me excited to learn new things. I took the example files the trainer had provided, modified them to suit my environment, and ran the script, passing update_item arguments like these:

UpdateExpression='SET book-price = :val',
ExpressionAttributeValues={
    ':val': {'N': '15.37'},  
    ':currval': {'N': '0'} 
},
ConditionExpression='book-price = :currval',
ReturnValues="ALL_NEW"

To my dismay, I started encountering errors.

Traceback (most recent call last):
  File "conditional_write.py", line 18, in <module>
    ReturnValues="ALL_NEW"
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 626, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the UpdateItem operation: Invalid UpdateExpression: Syntax error; token: "-", near: "book-price"

The Solution

I reviewed my code to ensure that I had not introduced any bugs myself. To confirm, I added new attributes without dashes to an item and ran the script successfully; then I started practicing my Google-Fu. There I found this awesome post on Stack Overflow along with a link to the official AWS documentation. The official documentation, however, only mentions the dot as a special character and doesn't list the dash (-). After following the instructions from the Stack Overflow post, my new code looked like this:

UpdateExpression='SET #bp = :val',
ExpressionAttributeValues={
    ':val': {'N': '15.37'},  # Make sure we keep this line the same
    ':currval': {'N': '0'}  # What was the current value?
},
ExpressionAttributeNames={
    "#bp": "book-price"
    },
ConditionExpression='#bp = :currval',
ReturnValues="ALL_NEW"

And once I implemented this code, it all started working correctly. I have left feedback for the AWS documentation team, and hopefully they will update the documentation. I just want to make sure that all the cases are listed and documented so that developers and wannabes like me do not get stuck.
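For context, the snippet above is only a fragment of the full update_item call. A minimal sketch of the complete request is shown below; the table name, key attribute, and book title are my own assumptions for illustration, not values from the lab.

```python
# '#bp' is an ExpressionAttributeNames placeholder that DynamoDB substitutes
# with the real attribute name, so the hyphen never reaches the expression
# parser. TableName and Key below are hypothetical.
params = {
    'TableName': 'books',
    'Key': {'book-title': {'S': 'Example Book'}},
    'UpdateExpression': 'SET #bp = :val',
    'ExpressionAttributeNames': {'#bp': 'book-price'},
    'ExpressionAttributeValues': {
        ':val': {'N': '15.37'},      # new price to set
        ':currval': {'N': '0'},      # expected current value
    },
    'ConditionExpression': '#bp = :currval',
    'ReturnValues': 'ALL_NEW',
}

# With real AWS credentials you would then call:
# import boto3
# response = boto3.client('dynamodb').update_item(**params)
```

The same placeholder trick works for any attribute name containing reserved words or special characters.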

Elections in the new world

Context:

A long-running episode has just turned an important page. Robert Mueller finally testified in front of Congress and, as I expected, provided the lawmakers with almost nothing beyond his report. The focus of the media and most of the public was on collusion and obstruction of justice. Indeed, that was the most newsworthy story, but in my opinion not the main story or the real threat to the democracies of the world. It was only Rep. Adam Schiff who brought up the question of the integrity and security of elections. Director Mueller had already highlighted it in his monologue of a press conference in May 2019. What will elections look like in the new world?

Subtext:

It is an important aspect that all the democratically elected governments of the world should be really worried about. In fact, questions are being asked about the validity of the Brexit referendum vote and even some of the assembly election results in India. The now-defunct Cambridge Analytica is suspected of being involved in, and perhaps even instrumental in, altering both outcomes.

[Image: Elections in India]

Just imagine if the Pakistani intelligence agency ISI decided to engage itself in Indian politics. It could ensure that a party sympathetic towards Pakistan comes to power. Or, even worse, it could ensure that an incompetent leader becomes the prime minister of India. That would be a disaster not only for India but for the stability of the region and, I daresay, even the world. I can't imagine India being ruled by a Congress party led by an inept leader like Rahul Gandhi.

With the world becoming more and more digital and online, governments should take infinitely more care to ensure data security and integrity so that election results are fair and correct. We see in day-to-day life how easy it is to hack a computer system and bring it down. The private companies of the world realize this and spend a fortune securing their IT infrastructure. Governments should realize it too. The bureaucrats must eliminate or reduce bureaucracy to a large extent and actually care about the security and integrity of the election process and its results.

Conclusion:

Ensuring the security of elections in the new world is straightforward if you think about it. First of all, governments must appoint competent people to key positions, with reasonable autonomy to perform their function. Strong and fair oversight will ensure that the right policies and procedures are implemented. Politicians must be kept at more than arm's length from the entire process. State-of-the-art technology should be implemented. Most importantly, the people involved in the process at the grassroots level should be given training and the right incentives.

This is just the starting point. But we don't have a lot of time to get it right. The bad actors are already off the blocks and the race is on!

A “Brave” new browser (?)

While watching the recently concluded 2019 Cricket World Cup, I saw some ads for Alluva, which calls itself a prediction platform. I am not sure how it works, but that's not the point of this article. I signed up for Alluva, and it had me create an account on MetaMask to receive the Alluva tokens. There, the MetaMask site strongly encouraged me to use a new browser called Brave.

Get Brave!

The browser itself is based on the Chromium project, and the developers state that they have "taken almost all of Google from the Chrome."

I was intrigued. I am not one to shy away from testing new technologies, so I decided to take Brave for a spin. I downloaded it, and the first few sites all worked fine, as the browser's core code base is Chromium itself. But the moment I tried to connect to my corporate sites, it started acting up. I faced two main issues while browsing:

  1. Any SSO-enabled site asked for a username and password instead of authenticating with the Kerberos ticket.
  2. Any SAML federation redirects simply failed, and the sites did not work.

These issues were a deal breaker for me. For all my technology evangelism, I just can't see myself using two browsers for my needs; I needed one browser. I was about to give up and go back to the tried and tested Firefox, but I refused to quit. I asked myself: if Chrome works, then why not Brave? What is different in Brave that is causing the issue? I found the answer in a feature request on GitHub and a Brave Community post. It looks like, when the browser was compiled, the developers disabled a couple of flags that are needed for SSO integration with Kerberos and for SAML redirects:

  • --auth-server-whitelist
  • --auth-negotiate-delegate-whitelist

When I ran the browser from the command line and passed the correct arguments for these parameters, everything worked fine. But it is not very convenient to always run it from the command line, and all your settings are lost. So I looked for a way to make the process automatic and repeatable. I searched a lot of forums and help sites and found the answer on superuser.com, which gives a step-by-step explanation of how to configure command line parameters for any application.
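For reference, the one-off command line invocation on macOS looked roughly like the sketch below. The binary path is the default Brave install location, and *.example.com is a placeholder you would replace with your corporate SSO domain.

```shell
# Hypothetical SSO domain -- substitute your own.
"/Applications/Brave Browser.app/Contents/MacOS/Brave Browser" \
  --auth-server-whitelist="*.example.com" \
  --auth-negotiate-delegate-whitelist="*.example.com"
```

The first flag tells the browser which servers it may negotiate Kerberos authentication with; the second allows delegation of credentials to those servers.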

I tried both methods and finally settled on the second as the better of the two.

The first method was a small application I created using the macOS Automator. It worked well, but I always had to launch it from wherever I had saved it: launching from the Dock, even after pinning the application there, fell back to the original application launcher. The second method modifies the application bundle, so it is a little riskier, but with enough due diligence and care, you can do it.

You can download the Brave browser by clicking here.

How to set up and use the NexxHome garage door opener with Google Home

After a lot of discussions and false starts, we finally took the plunge and started making our home a smart home, beginning with smart speakers and thermostats. After we moved into our own home, we were always somewhat worried about the garage door; once or twice we left it open, only for our neighbors to call and alert us about it.

We were not sure how to handle it until a couple of our friends told us about smart garage door openers that operate over WiFi and are accessible over the internet from anywhere. We did some research and, based on the reviews and feedback from our friends, decided to go with the NexxHome smart garage door opener. According to the product page on Amazon and the NexxHome website, the device works with Alexa as well as Google Home. While I was able to find a lot of sites showing how it works with Alexa, I was not able to find any tutorials on how to link and enable NexxHome with Google Home. The device doesn't even come up in the Google Home app when I try to add a device.

I was disappointed and stumped. But then, I found this document on the NexxHome support page that kind of gave me a direction to pursue.

Integrating NexxHome with Google Home is not very straightforward; there is a roundabout way of doing it. Before linking NexxHome with Google Home, the NexxHome app needs to be prepped a little:

  1. Open your NexxHome app and tap on Settings (the gear icon).
  2. On the next screen, tap on the "Works With" menu and enable Google Assistant.
  3. Once that is done, open your Google Home app and tap on the user icon.
  4. Tap on Explore, type Nexx Home in the search window, and select the Nexx Home result.
  5. In the next window, tap on Link to link your NexxHome with Google Assistant, and enter your NexxHome credentials when prompted to log in.
  6. Once you log in, Nexx will be linked, and a Try It button will appear so you can try the commands.

That's it. This successfully links NexxHome with your Google Home and Google Assistant.

Note: Although it links successfully, the linkage is not very reliable and you may not get the desired results every time. For me, however, it has worked as expected 8 times out of 10. I hope this helps others as well.