Posts

Showing posts from April, 2018

writing some manual test methods for binary suffix string tree

I am waiting for a python machine learning book to get here via the Amazon fairy, so in the meantime I decided to play with the exercise from class I left off on.

The goal I had in mind was to put words in as keys, with a suffix list as the value, into the binary tree.  Then add binary searches through the suffix lists into the tree.

So someone could search through the tree for the suffix they were looking for, and get any key value that had a matching suffix in their suffix list.

Say I had a tree of DNA strings, and I wanted to find all the key DNA sequences that contained 'ACTGAT'; it could return me a list of them.
I don't have it accomplished yet, but I just started doing this to get my head back into this file's code.
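The lookup I'm after can be sketched without the tree part.  A plain dict stands in for the binary tree here, and `suffixes` and `find_keys_with` are names I made up just for the sketch:

```python
def suffixes(s):
    """Every suffix of s, longest first."""
    return [s[i:] for i in range(len(s))]

def find_keys_with(table, pattern):
    """Keys whose suffix list has a suffix starting with pattern.

    This is substring search: 'ACTGAT' appears somewhere in a key
    exactly when some suffix of that key starts with 'ACTGAT'.
    """
    return [key for key, sufs in table.items()
            if any(suf.startswith(pattern) for suf in sufs)]

# dict stands in for the binary tree: key string -> its suffix list
table = {seq: suffixes(seq) for seq in ["GGACTGATTC", "TTTTCCAA", "ACTGAT"]}
print(find_keys_with(table, "ACTGAT"))  # ['GGACTGATTC', 'ACTGAT']
```

In the real version each suffix list would hang off a tree node, and the startswith scan would become a binary search over a sorted suffix list.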

I find it extremely useful to just tinker around with stuff.  See what I can make it do.  Might not be useful to whoever is reading this, but I find it fun and worthwhile.

Picture: Using my PrintTree class I made a while back, this…

blogger and links

Update: 6-20-18
I finally typed the right thing into the search bar to find the answer....
*hint: I was looking for blogger answers, when I should have been looking at blogspot....
Totally... seriously.... there's a button that does it. It's been there the WHOLE TIME!!!   It says Link   Awww damn it....


Lesson learned.... look for the simplest solution FIRST!

**  This was the wrong link... I'm working on finding the blogger help forum thread where Blogger said they were trying to fix an issue with links.  So no link unless I can find it; I can't confirm it yet.
* Going to try this one though:
https://productforums.google.com/forum/#!topic/blogger/PUr4IT7h4YI


well....  I'm still working on getting links to work.  I made a little progress: there aren't two sets of errors on the console anymore.

Followed instructions to set the content to allow HTTPS in settings: Basic

Now going to try and change some avatar...  Looks like Blogger has an old .js script that is messing …

Oops.

I've been neglecting the blog!

I started the Learn C the Hard Way book.  I'm out of the Junior Development course for a bit, but there's no reason to stop learning.

(as always drop comments! Hate the way I do it? like it? What can I do better?)
(blog isn't helpful?  tell me what would be helpful!)

So, I finished Wiwa for now.  I want to redo her in C when I get a better grasp of it.  I'll start posting things from the C journey.

For now, here is the Python file for Wiwa.  She is on GitHub, in a public repository with all the files to go with it.  The setup1.py file I made is for Linux; I need to make it more universal so any machine can install what you need without much fuss.  I'm also thinking about a file that will set up and activate a virtual environment for the user.  When I get to it, I'll post it.
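For the virtual-environment file I'm thinking about, a minimal sketch using only the stdlib venv module could look like this.  `make_env` and the `wiwa-env` name are made up for the sketch, and activation would still have to happen in the user's shell:

```python
import venv
from pathlib import Path

def make_env(path="wiwa-env"):
    """Create a virtual environment for Wiwa with the stdlib venv module.

    with_pip=False keeps creation quick; the user still activates it
    themselves, e.g. `source wiwa-env/bin/activate` on Linux.
    """
    venv.EnvBuilder(with_pip=False).create(path)
    return Path(path)

env = make_env()
print((env / "pyvenv.cfg").exists())  # True
```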

So! 
<a href="https://github.com/nelliesnoodles/Whispering-Wall"> Github Link to Wiwa </a>
update: This is not easy to fix! So maybe it do…

more nltk tinkering

Found a way to kind of filter out the verbs I didn't want.  Also came up with a few more methods to play with nltk and its different things.
Disclaimer: I've manually tested with print statements and such, but have not yet written a pytest for it.
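The filtering leans on the part-of-speech tags: nltk.pos_tag() returns (word, tag) pairs using the Penn Treebank tag set, where every verb tag starts with 'VB'.  Here's a sketch of that kind of filter, with the tagged pairs hard-coded so it runs without the nltk data sets (the SKIP set is just an example blocklist, not my actual list):

```python
# nltk.pos_tag(nltk.word_tokenize("I was running to the store")) returns
# (word, tag) pairs like these; hard-coded so the sketch runs stand-alone.
tagged = [("I", "PRP"), ("was", "VBD"), ("running", "VBG"),
          ("to", "TO"), ("the", "DT"), ("store", "NN")]

# Penn Treebank verb tags all start with 'VB': VB, VBD, VBG, VBN, VBP, VBZ.
SKIP = {"be", "is", "are", "am", "was", "were"}  # verbs I don't want

def keep_verbs(pairs, skip=SKIP):
    """Pull the verbs out of a tagged sentence, minus the blocklist."""
    return [word for word, tag in pairs
            if tag.startswith("VB") and word.lower() not in skip]

print(keep_verbs(tagged))  # ['running']
```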

Updated nltk0_ex.py file:




#!/usr/bin/python3
# -*- coding: utf-8 -*-
import sys
import re
from nltk.corpus import wordnet
from random import randint
import nltk as nltk



# place script1, script2,  sys.argv[]  here
#script1 = sys.argv[1]
#script2 = sys.argv[1]

"""
  Requires:
above imports and :
install - nltk
install - python3
In your python3 shell type these to download needed data sets:
>>> import nltk
>>> nltk.download('wordnet')
>>> nltk.download('punkt')
>>> nltk.download('averaged_perceptron_tagger')
make_noun_response() -- requires script1 as sys.argv
make_verb_response() -- requires script2 as sys.argv
   action_verb_getter() -- requires script2 as sys.argv
"""…

methods to play with nltk so far

Update: Found some new things to do....  Putting them on next post, with updated code.

I'm not having much luck doing anything useful with the verbs in the user's input.  I can't seem to find anything in the nltk material online suggesting there's a way of distinguishing 'action verbs' from the other verbs.  Action verbs would be very useful.
I'll make some pictures to show off a bit.   The script1.txt, and script2.txt I'm using as the sys.argv[1] are simple files with questions.  I'll do a pic of that too.

So, here is my tinkering so far.  And it is really fun to play with.  Maybe I'm odd, I think it's fun to pretend my computer has my quirky sense of humor.

Pictures:

[Picture: noun response, method make_noun_response()]

[Picture: verb response, method make_verb_response()]

[Picture: script1.txt, the noun response questions]

Code:

#!/usr/bin/python3
# -*- coding: utf-8 -*-
import sys
from nltk.corpus import wordnet
from random import randint
import nltk as nltk


# place script1, scrip…

to make a script or not

Still playing with the idea of making a script file for my Wiwa to read from... it wouldn't be that hard...

A pic of me playing with it.   She only has 6 questions so far to play with:

Here's a little code I worked up to test out the theory:
It will need some error-proofing, but it seems like I could just pop a file full of scripted questions for her to ask, and have the bot pick one at random from the text file and print it off.

Problem is, if she picks a lot of the same questions over and over, which random does sometimes, it won't seem very real.  But I'm still playing with all the ideas.
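One way around the repeats: shuffle the whole pool and deal it out like a deck of cards, so nothing repeats until every question has been asked once.  `question_stream` is a made-up name, and the demo list stands in for the text file:

```python
import random

def question_stream(questions):
    """Yield questions in random order, exhausting the whole pool
    before anything repeats; avoids randint handing out the same
    question twice in a row."""
    pool = list(questions)
    while True:
        random.shuffle(pool)
        yield from pool

# stand-in for reading one question per line out of the text file
demo = ["What's your name?", "Favorite color?", "Are you a robot?"]
stream = question_stream(demo)
first_round = [next(stream) for _ in range(len(demo))]
print(sorted(first_round) == sorted(demo))  # True: each asked exactly once
```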

Also going to play with putting the questions into an SQL db and having her get them from there....  I don't know that it's better....  A text file is very hackable, but it takes up so little space, and it's all for fun and learning at the moment.  We'll see.
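The SQL version can stay tiny with the stdlib sqlite3 module.  The table name and questions here are made up for the sketch, and :memory: stands in for a real db file:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path here would persist the db
conn.execute("CREATE TABLE questions (id INTEGER PRIMARY KEY, text TEXT)")
conn.executemany("INSERT INTO questions (text) VALUES (?)",
                 [("What's your name?",), ("Favorite color?",)])
conn.commit()

def random_question(conn):
    """Let SQLite pick the random row itself."""
    row = conn.execute(
        "SELECT text FROM questions ORDER BY RANDOM() LIMIT 1").fetchone()
    return row[0]

print(random_question(conn))
```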

A text file would also mean someone could change her to their whim.  She'd be easily modifiable.  And that would be fun, or …