CSC 161 | Grinnell College | Fall 2011 | Imperative Problem Solving and Data Structures

Certain variable types, such as arrays, are always passed in by reference. Primitive data types, such as ints, chars, and doubles, are passed by value by default. However, you can also pass primitive data types by reference using the & operator. Look over amp-examp.c and write a few sentences explaining what it does, and why it is possible.

Copy program rand-beep.c into your directory for this lab and run it a few times to see what it does. Write a few sentences explaining what the program is doing, making sure to include the following points: Point out every time a primitive value is passed in by reference to a function. Note how values passed in by reference are used within the function. What durations are possible for the robot to beep for? What frequencies are possible for the robot to beep for? When will the loop end?

Copy get-ir.c into your directory for this lab and write a few sentences explaining what the program is doing. Complete the parts of the program which say "code here!". It should only require one line of code at each spot, and there are three locations where you must add code. Hint: If you are getting the warning "assignment makes pointer from integer without a cast", try putting * (an asterisk) in front of the integer you are assigning to.

Copy mult-ret.c into your directory for this lab and answer the following questions: Is the program currently using the arrays or the int variables to display the light and obstacle sensors? Would you be able to modify the program so that it doesn't use any arrays? Modify the program so that instead of taking in arrays, set_lights and set_obsts take in three integer addresses (light0, light1, light2) and (obst0, obst1, obst2) and set them to the appropriate values. These values should then be printed in the printf statements. There should be no arrays in your new version of the program.
You will no longer be able to use rGetLightsAll() and rGetObstacleAll(). Write a simple function which finds the square root of a number. It will have the following signature: int safe_sqrt (double * num). It will use the double sqrt(double num) function from the math.h library to find the square root of num. Make sure you compile with the -lm flag when using sqrt(). If num is zero or positive, your function will modify num to be its square root and return 1 to indicate success. If num is negative, your function will not modify num and will return 0 to indicate failure. You will have to use the & operator when calling your function so that num is passed in by reference.
class Rule:
    # gives an A => B form of a rule
    def __repr__(self):
        lnd = list(self._condition.keys())
        condition = "( "
        for x in range(0, len(lnd)):
            if not self._condition[lnd[x]]:
                condition += '!'
            condition += lnd[x]
            if x != len(lnd) - 1:
                condition += ' ^ '
            if x == len(lnd) - 1:
                condition += " )"
        lnd = list(self._conclusion.keys())
        conclusion = "( "
        for x in range(0, len(lnd)):
            if not self._conclusion[lnd[x]]:
                conclusion += '!'
            conclusion += lnd[x]
            if x != len(lnd) - 1:
                conclusion += ' ^ '
        return str(condition) + " => " + str(conclusion) + " ) "

    # constructor
    def __init__(self, facts, condition, conclusion, uncertainty):
        self._condition = dict()
        self._conclusion = dict()
        self._checker = facts
        self._uncertainty = uncertainty
        # init condition
        for literal in condition:
            _temp = list(literal.keys())[0]
            # if fact doesn't exist, create it
            if not (_temp in facts):
                facts[_temp] = None
            # links checker
            self._condition.update(literal)
        # init conclusion
        for literal in conclusion:
            _temp = list(literal.keys())[0]
            self._conclusion.update({_temp: literal[_temp]})
            if _temp not in self._checker:
                self._checker.update({_temp: None})

    # getters
    def get_condition(self):
        return self._condition

    def get_conclusion(self):
        return self._conclusion

    def get_uncertainty(self):
        return self._uncertainty

    def validate(self):
        has_false = False
        has_none = False
        for key in self._condition:
            has_none += self._checker[key] is None
            has_false += (self._checker[key] != self._condition[key]
                          and self._checker[key] is not None)
        if bool(has_false):
            return False
        elif bool(has_none):
            return None
        else:
            return True

    def count_none(self):
        count = 0
        for x in self._condition:
            if self._checker[x] is None:
                count += 1
        return count

    def is_deductable(self, questions):
        dedu_count = 0
        for x in self._condition:
            if x not in questions:
                dedu_count += 1
        return dedu_count > 0

    def has_implied(self, inter_rule):
        for x in self._condition:
            if x in inter_rule:
                return True
        return False
At our core, we believe that companies should prioritize authentic customer interactions. Understanding our customers on a deeper level helps us to develop products and services that truly resonate with them. As more business activities move online, it can be difficult to create meaningful face-to-face interactions. While data analysis tools like store analytics provide valuable insights into customer behavior, they often fail to capture the human element behind it. That's where we come in. Our mission is to help you connect with your customers in a more meaningful way, allowing you to gain deeper insights into your customers' needs and wants. With this knowledge, we can work together to strengthen customer relationships, elevate the customer experience, and ultimately drive growth for your business. We treat each customer not as a statistic, but as a person we can relate to. We adapt to different groups of users on different platforms. We provide guides only when users need them. We make our users and their customers our primary focus. After graduating from university, I joined Adways Inc. in 2006. I experienced a wide range of operations from ad tech to management, and left the company in 2009. In 2016, I founded EDOCODE and became its representative director. My goal is to develop products that reduce inconvenience in the world through the power of software. I'll take a bold bite of Gojiberry on top of my almond tofu pudding! My name is Timmy! I was born and raised in the U.S. but I'm a Kansai person at heart, and I'm a Product Manager at Gojiberry. I have experience in multiple startups, and I'm a big fan of people who are working hard to make their businesses successful. I enjoy Gojiberry on my cereal every morning! My name is Sherry and I am leading design at Gojiberry. I was born in Beijing, China and moved to the U.S. at the age of 14.
I studied Human Computer Interaction at Carnegie Mellon University, and I had experience designing e-commerce apps in the U.S. before joining the Gojiberry team. My goal is to help people discover things they didn't know before. I like Gojiberry best when it's on almond tofu pudding! My name is Desmond and I am an engineer for Gojiberry at EDOCODE. I was born in Brunei, studied in the UK and now live in Tokyo. I'm happy to be a part of a team that can deliver a great product to the world, and I think Gojiberry tastes great in soup! My name is Anne-Lise and I am French. I am a web engineer at EDOCODE. I live in Tokyo, Japan. I'm passionate about space, and I'm vice president of the association Open Space Makers. I run marathons and trails. I love cats, Gojiberry and chocolatey cookies the most. My name is Saloni. I am an Indian engineer working with Gojiberry. I was born and brought up in India and graduated from IIT. I enjoy reading about mathematical topics like chaos theory and instances of paradox. I aspire to own a business one day. I'm delighted to be a part of a dynamic and talented team working towards delivering wonderful products. Interestingly, I've tried all berries except for the Gojiberry. My name is Amee and I joined Gojiberry as the Marketing Manager. I was born in Guangzhou, China and moved to Canada at the age of 12. I then moved to the US for university, where I studied Psychology with a business focus at the University of California, Berkeley. Prior to Gojiberry, I worked at various startups and founded an organisation called Startup Lady Japan with the mission to support women entrepreneurs. I'd like to make the world a better place through technology and innovation. I enjoy Gojiberry in my soup and in my tea! My name is Frank, and I'm an engineer at Gojiberry. I grew up in Melbourne, Australia, and studied Computer Science and Japanese at Monash University. It's great to be part of a vibrant startup!
In my free time, I like playing jazz piano and board games. Nothing like a warm cup of Gojiberry tea! I'm Yuping, an engineer at Gojiberry. Raised in Taiwan and having spent about 7 years in Japan post-college, I'm now working remotely from Taiwan. I'm thrilled to be part of a diverse team committed to creating impactful products. I'm a foodie and my favorite way to enjoy Gojiberry is in chicken soup! My name is Yutaka, and I'm an engineer at EDOCODE. I studied Computer Science in the United States. I am very excited to be a part of this proficient team. In my free time, I like walking with my wife and listening to jazz. My favorite way to eat Gojiberry is with cereal and milk. I'm Tingya, a Taiwanese designer at EDOCODE working on Gojiberry. I pursued my studies in industrial design in both Taiwan and Japan. Before joining EDOCODE, I worked as an industrial designer, where my passion lay in enhancing the interaction between users and products. I like Gojiberry in soup! My name is Jenna, and I'm a design intern for Gojiberry! I was born and raised in the US and moved to Tokyo in 2023 after graduating from Stanford University with a B.S. in Product Design. I am passionate about human-centered design and am excited to be a part of such a diverse and close-knit team. Outside of work I enjoy hiking, climbing, and listening to audiobooks. I look forward to trying gojiberries in all the ways my teammates have mentioned!
I have an ESXi 5.0 host on an HP ML350 G6. ESXi boots off an SD card. It previously had a RAID-1 array with 2 x 300GB SAS disks. I have added another 2 identical disks, and converted the array to a 558GB RAID-10 array using HP Array manager. ESXi sees the additional space correctly in Configuration / Storage Adapters (it now shows the total size as 558GB instead of 274GB). In Datastore properties, it shows total capacity in the top left corner as 274GB, and the Device in the bottom right corner as SAS Disk 558GB. (See attached screen shot.) The previous RAID-1 array had a single datastore added taking up the entire drive; it had not been extended etc. When I click on Increase in the Datastore properties, no free space is shown. Same thing if I attempt to create a new datastore. How do I get the extra space added into the datastore without backing up the VMs, deleting the datastore and recreating it? It is a single host site, so I can't move them to a different host. Are you sure you are booting off the SD card? The partition layout looks like the Hypervisor is installed on the disk!? Anyway, please take a look at Can't increase Datastore capacity VMFS 5.54 where I discussed how to manually increase the local datastore with ESXi 5. Well I certainly thought I was booting off the SD card, but I was surprised by how many partitions there were on there as well. Would that have made a difference to being able to extend it? I will give it a go using the command line methods. Would that have made a difference to being able to extend it? It looks like the GUI only offers (supports) growing a VMFS datastore if this is the only partition on a disk/LUN. With the installation on the SD card and only the VMFS datastore on the disk this should have worked. Well I certainly thought I was booting off the SD card, but I was surprised by how many partitions there were on there as well. Are you really sure that you are booting from the SD card?
Try to remove it and start the server and see if it still works. As noted above, this is the partition layout of ESXi, and the 4 GB partition ("scratch") is only created when ESXi is installed on a hard disk. Should I be able to see the SD card from the vSphere Client, and see what partitions are on it? Message was edited by: a.p. - removed personal data (phone, email, ...) You could log in through the ESXi Shell and run this command: Watch for a 4.0 GB vfat partition; it should not be there if booting from SD card / USB.
import HTMLObject from './HTMLObject';
import {
  Point, Transform, Rect,
} from '../../../tools/g2';

describe('Figure HTML Object', () => {
  let parentDiv;
  let h;
  let mockElement;
  beforeEach(() => {
    parentDiv = {
      clientWidth: 1000,
      clientHeight: 500,
      getBoundingClientRect: () => new Rect(100, 100, 1000, 500),
    };
    // Element is initially in center of the parent Div
    mockElement = document.createElement('div');
    // mockElement.getBoundingClientRect = () => new Rect(475, 240, 50, 20);
    Object.defineProperty(mockElement, 'clientWidth', {
      writable: true,
      value: 50,
    });
    Object.defineProperty(mockElement, 'clientHeight', {
      writable: true,
      value: 20,
    });
    mockElement.style.left = '475px';
    mockElement.style.bottom = '240px';
    // mockElement.style.width = '50px';
    // mockElement.style.height = '20px';
    // mockElement.clientWidth = 50;
    // mockElement.clientHeight = 20;
    // const mocker = document.createElement('div');
    // document.body.appendChild(mocker);
    // mockElement = {
    //   style: {},
    //   getBoundingClientRect: () => new Rect(475, 240, 50, 20),
    //   clientWidth: 50,
    //   clientHeight: 20,
    //   getAttribute: mocker.getAttribute,
    //   setAttribute: mocker.setAttribute,
    // };
    const element = document.createElement('div');
    const inside = document.createTextNode('test text');
    element.appendChild(inside);
    element.setAttribute('id', 'html_test_element');
    document.body.appendChild(element);
    h = new HTMLObject(parentDiv, 'html_test_element', new Point(0, 0), 'middle', 'center');
    h.element = mockElement;
    h.originalElement = mockElement;
  });
  test('Instantiation', () => {
    expect(h.location).toEqual(new Point(0, 0));
    expect(h.yAlign).toBe('middle');
    expect(h.xAlign).toBe('center');
    expect(h.parentDiv).toBe(parentDiv);
    expect(h.element).toBe(mockElement);
  });
  test('glToPixelSpace', () => {
    let gl = new Point(0, 0);
    let pixel = h.glToPixelSpace(gl);
    let expected = new Point(500, 250);
    expect(pixel).toEqual(expected);
    gl = new Point(-1, 1);
    pixel = h.glToPixelSpace(gl);
    expected = new Point(0, 0);
    expect(pixel).toEqual(expected);
    gl = new Point(1, -1);
    pixel = h.glToPixelSpace(gl);
    expected = new Point(1000, 500);
    expect(pixel).toEqual(expected);
  });
  describe('transformHtml', () => {
    test('Center, middle, (0,0) and transforms', () => {
      h.transformHtml(new Transform().matrix());
      expect(h.element.style.position).toEqual('absolute');
      expect(h.element.style.left).toEqual('475px');
      expect(h.element.style.top).toEqual('240px');
      expect(h.element.style.visibility).toEqual('visible');
      h.transformHtml(new Transform().translate(0.1, 0.1).matrix());
      expect(h.element.style.position).toEqual('absolute');
      expect(h.element.style.left).toEqual('525px');
      expect(h.element.style.top).toEqual('215px');
      expect(h.element.style.visibility).toEqual('visible');
      h.transformHtml(new Transform().translate(0.1, 0).rotate(Math.PI / 2).matrix());
      expect(h.element.style.position).toEqual('absolute');
      expect(h.element.style.left).toEqual('475px');
      expect(h.element.style.top).toEqual('215px');
      expect(h.element.style.visibility).toEqual('visible');
    });
    test('Left, right, bottom, top', () => {
      h.xAlign = 'left';
      h.transformHtml(new Transform().matrix());
      expect(h.element.style.position).toEqual('absolute');
      expect(h.element.style.left).toEqual('500px');
      expect(h.element.style.top).toEqual('240px');
      expect(h.element.style.visibility).toEqual('visible');
      h.xAlign = 'right';
      h.transformHtml(new Transform().matrix());
      expect(h.element.style.position).toEqual('absolute');
      expect(h.element.style.left).toEqual('450px');
      expect(h.element.style.top).toEqual('240px');
      expect(h.element.style.visibility).toEqual('visible');
      h.xAlign = 'center';
      h.yAlign = 'top';
      h.transformHtml(new Transform().matrix());
      expect(h.element.style.position).toEqual('absolute');
      expect(h.element.style.left).toEqual('475px');
      expect(h.element.style.top).toEqual('250px');
      expect(h.element.style.visibility).toEqual('visible');
      h.yAlign = 'bottom';
      h.transformHtml(new Transform().matrix());
      expect(h.element.style.position).toEqual('absolute');
      expect(h.element.style.left).toEqual('475px');
      expect(h.element.style.top).toEqual('230px');
      expect(h.element.style.visibility).toEqual('visible');
    });
  });
  test('getBoundaries', () => {
    mockElement.getBoundingClientRect = () => new Rect(475, 240, 50, 20);
    const b = h.getBoundaries();
    expect(b[0].map(p => p.round(3))).toEqual([
      new Point(-0.25, 2.36),
      new Point(-0.15, 2.36),
      new Point(-0.15, 2.28),
      new Point(-0.25, 2.28),
    ]);
  });
});
TypeError: Cannot read property 'call' of undefined I have this weird problem where after I run a fresh new instance I have my css broken and after some refreshes (cache?) it works normal. [Vue warn]: Error in beforeCreate hook: "TypeError: Cannot read property 'call' of undefined" found in ---> <Home> at src/pages/Home.vue <App> at src/App.vue <Root> warn @ vue.esm.js?a026:591 vue.esm.js?a026:1741 TypeError: Cannot read property 'call' of undefined at __webpack_require__ (app.js:765) at fn (app.js:128) at eval (eval at ./node_modules/css-loader/index.js?sourceMap!./node_modules/vue-loader/lib/style-compiler/index.js?{"optionsId":"0","vue":true,"scoped":false,"sourceMap":true}!./node_modules/vue-multiselect/dist/vue-multiselect.min.css (13.js:32), <anonymous>:1:28) at Object../node_modules/css-loader/index.js?sourceMap!./node_modules/vue-loader/lib/style-compiler/index.js?{"optionsId":"0","vue":true,"scoped":false,"sourceMap":true}!./node_modules/vue-multiselect/dist/vue-multiselect.min.css (13.js:32) at __webpack_require__ (app.js:765) at fn (app.js:128) at eval (eval at ./node_modules/extract-text-webpack-plugin/dist/loader.js?{"omit":1,"remove":true}!./node_modules/vue-style-loader/index.js!./node_modules/css-loader/index.js?sourceMap!./node_modules/vue-loader/lib/style-compiler/index.js?{"optionsId":"0","vue":true,"scoped":false,"sourceMap":true}!./node_modules/vue-multiselect/dist/vue-multiselect.min.css (13.js:43), <anonymous>:4:15) at Object../node_modules/extract-text-webpack-plugin/dist/loader.js?{"omit":1,"remove":true}!./node_modules/vue-style-loader/index.js!./node_modules/css-loader/index.js?sourceMap!./node_modules/vue-loader/lib/style-compiler/index.js?{"optionsId":"0","vue":true,"scoped":false,"sourceMap":true}!./node_modules/vue-multiselect/dist/vue-multiselect.min.css (13.js:43) at __webpack_require__ (app.js:765) at fn (app.js:128) Reproduction Link None Steps to reproduce Hard to reproduce because after couple of refreshes problem goes away. 
Expected behaviour
To behave as expected, I guess.
Actual behaviour
Breaking CSS.
Can't really help you based on the description you provided. Please try to create a reproduction link. The website I think you're using the component on seems to work properly. Maybe it's an issue with your setup? Where are you adding the css file? I'm using it exactly how the documentation states. You could try opening it in incognito mode - if that won't expose the error then I will happily provide a sample of that. Yes, the website is from Dermot, who contacted you recently :) we really need that working and your lib is the best out there. On Thu, 16 Aug 2018 at 22:28, Damian Dulisz<EMAIL_ADDRESS>wrote: Can't really help you based on the description you provided. Please try to create a reproduction link. The website I think you're using the component on seems to work properly. Maybe it's an issue with your setup? Where are you adding the css file? (https://github.com/shentao/vue-multiselect/issues/791#issuecomment-413674427) Tried it in incognito mode in Firefox, Opera and Chrome – no errors related to Multiselect. The component also looks alright on every attempt. Could you please show the code where you import the CSS? The stack points at app.js. Sure, so it looks like this when the instance is fresh: And here is how the code is placed: Could you try to move the CSS import to main.js or app.js (the root file of your app) and import it like this? import 'vue-multiselect/dist/vue-multiselect.min.css' // or require('vue-multiselect/dist/vue-multiselect.min.css') Moved there, but no luck.
Here's the main.js file:

import Vue from 'vue'
import VueRouter from 'vue-router'
import router from './router'
import store from<EMAIL_ADDRESS>
import 'vue-multiselect/dist/vue-multiselect.min.css'
import BootstrapVue from 'bootstrap-vue'
import 'bootstrap/dist/css/bootstrap.css'
import 'bootstrap-vue/dist/bootstrap-vue.css'
import VueAnalytics from 'vue-analytics'

const isProd = process.env.NODE_ENV === 'production'

Vue.use(BootstrapVue)
Vue.use(VueRouter)
Vue.use(router)
Vue.router = router
Vue.use(VueAnalytics, {
  id: 'UA-122813704-1',
  router,
  debug: {
    enabled: false, // !isProd
    sendHitTask: isProd
  }
})

Vue.config.devtools = true
Vue.config.performance = true
Vue.config.productionTip = false

import App from './App'

new Vue({
  store,
  el: '#app',
  router,
  template: '<App/>',
  components: { App },
  computed: {
    isAuthenticated: vm => vm.$store.getters.isAuthenticated
  }
})

I do get one error related to style.css that the page tries to load but is not available. No other issues. Also, it seems you're having this bug in development mode (I can see the HMR log). This is really strange. When the next deploy occurs I will let you know - that's when it happens in production. Okay! :) My biggest apologies. It turned out that Webpack does not handle dynamic imports too well at the beginning and that's why this error showed. Sorry for bothering and thanks for helping out :)
Released Sep 246th 2019 Upgrade the Command Line Interface (CLI) with npm install --global @sanity/cli Upgrade the Content Studio with: 🐛 Bugfixes and improvements - Fixed a bug in the editor for Portable Text where span text containing an annotation would not be patched correctly (#1465). - Fixed a bug in the editor for Portable Text where custom block objects containing the key .children under some circumstances would not be patched correctly (#1468). - Previously, if a custom input didn't have a (React) ref, the studio would crash and display an error message. Since it's a lot more common now to create function components in React, and function components can't be given refs, we made sure to display a warning pointing at the problem and the actual input instead of failing. More details can be found in our help article (#1467). - Fixed a bug in history view, where signed-out SSO users would cause the studio to crash (#1469). - When there is only one available tool (for example the desk-tool by default) we no longer show the tool switcher (#1458). - If you add your studio URL to the list of allowed CORS origins but forget to check Allow credentials, you will now get a more helpful warning (#1431). - Improved the way available updates to the Studio are displayed (#1472) 📓 Full changelog |Per-Kristian Nordnes||form-builder Various Portable Text Editor fixes (#1465)||e4dbc68c6| |Bjørge Næss||structure Add type assertion||d39fdbc57| |Per-Kristian Nordnes||form-builder Block editor: Fix bug where inline texts was not patches correctly (#1468)||913a2f922| |Bjørge Næss||form-builder Gracefully handle missing input refs (#1467)||ddb47a9f3| |Per-Kristian Nordnes||components Guard against null users in HistoryListItem (#1469)||f9523faf0| |Kristoffer J.
Sivertsen||default-layout Hide toolswitcher when no choices (#1458)||abe0d2623| |Even Westvang||desk-tool Remind about credentials when adding CORS origin (#1431)||29b2db206| |Per-Kristian Nordnes||form-builder Various Portable Text Editor fixes (#1473)||90c301f18| |Marius Lundgård||default-layout Fix navbar for storybook||2eabb7954| |Victoria Bergquist||default-layout improve studio module update indicator (#1472)||c8647ffd6|
Ryan, as in his earlier stints in rehab, openly refuses to conform to the facility's rules; the facility's director, speaking at a follow-up hearing after Ryan's first 60 days at Oasis, informs the judge of Ryan's continued rule-breaking and asks the judge to remind Ryan of the terms of the sentence. Unlike those in the play, the protagonists of the novel succeed in directly and unambiguously murdering not just one but ten victims, and the book ends in a lengthy trial scene not in the theatrical version. The design of this machine was facilitated through the use of computer-aided design software. From such perspective, transhumanism is perpetually aspirational: There are no FDA-approved medicines for idiopathic hypersomnia. This is a partial list of molecules that contain 12 carbon atoms. Taken together, the studies above suggest an important evolutionary mechanism in which more food is eaten, more nutrients are stored, and less energy is expended by an organism during times of stress. The most common serious complications of surgery are loss of urinary control and impotence. While he said in June 2014 that his comments were taken out of context, he does not regret making them. It has also been used to treat nocturnal enuresis because of its ability to shorten the time of delta wave stage sleep, where wetting occurs. Ototoxic drugs include antibiotics such as gentamicin, streptomycin, tobramycin, loop diuretics such as furosemide and platinum-based chemotherapy agents such as cisplatin and carboplatin.
Even if it is not a common substrate for metabolism, benzene can be oxidized by both bacteria and eukaryotes. Perhaps, due to its price, it never sold well. And, unfortunately, 90% of the entertainment community can relate. Four more boys were confirmed to be out of the cave and then taken to the hospital. It is an analogue of methylphenidate where the phenyl ring has had a 3,4-dichloro substitution added, and the piperidine ring has been replaced by cyclopentane. Burns and Company, which became the largest meat processor in western Canada. The latter three are known as the catecholamines. The damaged structure then produces the symptoms the patient presents with. When heated to decomposition this compound emits toxic fumes of nitrogen oxides. One, the Khamis Brigade, was led by his son Khamis. The benefit of amiodarone in the treatment of atrial fibrillation in the critical care population has yet to be determined but it may prove to be the agent of choice where the patient is hemodynamically unstable and unsuitable for DC cardioversion. Sameridine is not currently a controlled drug, although if approved for medical use it will certainly be a prescription medicine, and it would probably be assigned to one of the controlled drug schedules in more restrictive jurisdictions such as Australia and the United States, especially if it were found to be addictive in animals. Now that most people carry a cell phone on their person, the authorities have the ability to constantly know the location of a large majority of citizens.
Similarly to tramadol, it was believed faxeladol would have analgesic, as well as antidepressant effects, due to its action on serotonin and norepinephrine reuptake. Isopropyl alcohol is often sold in aerosol cans as a windshield or door lock deicer. She slips deeper into depression and, through the haze of her furthering drug addiction, decides to seek revenge on her fiancé's killer. Chris and Angie admit their feelings for each other. It was Turkey's third, the first woman-to-woman and the first three-dimensional with bone tissue. After they broke up, Rafael moved back to Brazil, where he got a different job and started another relationship. The capillary blood flow velocity also increased in each individual patient and the mean capillary flow velocity of all patients increased significantly. After quitting Wellman Plastics, Roseanne and Jackie must find new jobs. When we announced our findings about Chris, some in the media said it was 'roid rage. The fact that Kat gets pregnant at all is amazing. Varied conditions include absent sexual organs, hermaphrodite and other genetic malformations, or trauma such as amputation or lacerations. In former times, butanol was prepared from crotonaldehyde, which can be obtained from acetaldehyde. This was the first time the band received a full credit. Skandia Insurance have their UK base there. During the prime-time special, Houston spoke about her drug use and her marriage, among other topics. Eventually only the silhouettes of previous rings will be left. It is used at very high doses by mouth or by intramuscular injection to treat this disease. They defeated Alianza in both finals, giving the club four championships in five years. The protein also contains a number of different allosteric binding sites which modulate the activity of the receptor indirectly. She is the president of Robert F. States have taken the lead on this issue.
Chinese triad gangs eventually came to play a major role in the illicit heroin trade. While Betty claimed her story was straight out of Carrie, Hilda told Justin what really happened, since she was there that night. I always managed not to be punished.
Azure SQL External table of Azure Table storage data Is it possible to create an external table in Azure SQL of the data residing in Azure Table storage? The answer is no. I am currently facing a similar issue and this is my research so far: Azure SQL Database doesn't allow Azure Table Storage as an external data source. Sources: https://learn.microsoft.com/en-us/sql/t-sql/statements/create-external-data-source-transact-sql?view=sql-server-2017 https://learn.microsoft.com/en-us/sql/t-sql/statements/create-external-file-format-transact-sql?view=sql-server-2017 https://learn.microsoft.com/en-us/sql/t-sql/statements/create-external-table-transact-sql?view=sql-server-2017 Reason: The possible data source scenarios are to copy from Hadoop (Data Lake/Hive, ...), Blob (text files, csv) or an RDBMS (another SQL Server). Azure Table Storage is not listed. The possible external data formats are only variations of text files/Hadoop: Delimited Text, Hive RCFile, Hive ORC, Parquet. Note - even copying from blob in JSON format requires implementing a custom data format. Workaround: Create a copy pipeline with Azure Data Factory. Create a copy function/script with Azure Functions using C# and manually transfer the data. Yes, there are a couple of options. Please see the following: CREATE EXTERNAL TABLE (Transact-SQL) APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse. Creates an external table for PolyBase, or Elastic Database queries. Depending on the scenario, the syntax differs significantly. An external table created for PolyBase cannot be used for Elastic Database queries. Similarly, an external table created for Elastic Database queries cannot be used for PolyBase, etc. CREATE EXTERNAL DATA SOURCE (Transact-SQL) APPLIES TO: SQL Server (starting with 2016), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse. Creates an external data source for PolyBase, or Elastic Database queries.
Depending on the scenario, the syntax differs significantly. An external data source created for PolyBase cannot be used for Elastic Database queries. Similarly, an external data source created for Elastic Database queries cannot be used for PolyBase, etc. What is your use case? 80% of my organization's data is residing in Azure SQL and rest is in Azure table storage. I just want to analyze the both together. The best way I think is to create a external table in Azure SQL to the data pointing to Azure table storage. What do you reckon? And I am choosing Azure table because I have been getting 1GB of Json data daily since 2015 which Azure SQL wont be able to handle and cosmos db is expensive. I dont think I can use appendblob because it has 195GB upper limit.
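As the answers note, the external-table paths that Azure SQL Database does support are elastic queries against another SQL database (Table storage stays out of reach). For reference, a minimal elastic-query setup looks roughly like this; all object names, the server address, and credentials below are placeholders, not values from this thread:

```sql
-- Elastic query sketch (all names/credentials are placeholders):
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteCred
WITH IDENTITY = 'remote_user', SECRET = '<remote password>';

CREATE EXTERNAL DATA SOURCE RemoteSqlDb
WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'RemoteDb',
    CREDENTIAL = RemoteCred
);

-- The external table's columns must match the remote table's schema.
CREATE EXTERNAL TABLE dbo.RemoteOrders (
    OrderId INT,
    Amount  DECIMAL(10, 2)
)
WITH (DATA_SOURCE = RemoteSqlDb);
```

Nothing analogous exists for `TYPE = ...` pointing at Table storage, which is why the Data Factory / Azure Functions workaround is needed.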
First, sorry if some of my vernacular is off. I'm a software engineer, but not a home networking expert. Also, full disclosure, I posted this same question on Reddit. I was directed to this thread in the comments, but I suspect my issue may be different since I was experiencing it over WiFi.

I have a Linksys EA6350v3. It was consistently restarting under "heavy" loads (6 devices across both wireless signals, speeds around 40 Mbps divided between all devices). I'd connect my computer wirelessly and start updating a game -- no problem. Then I'd connect my PS4 wirelessly and start updating. A minute later I'd experience a restart.

I flashed OpenWrt 19.07 and tried the same pattern: connect computer and update, then connect PS4 and update. Again, a minute or so after both devices connected, it restarted. I watched the logs while this happened and nothing notable seems to happen. I've attached system and kernel logs. The restart occurred around 17:04 in the system logs. I connected both PC and PS4 at ~10:00. I also started streaming videos on YouTube to poke the system into failing again, if that's relevant.

If you're seeing the same issue while running the stock OEM firmware as well, this would suggest a defective device - usually one would close the issue here and start the warranty replacement procedure (after flashing back the OEM firmware, of course). Things you might be able to look at:

- replace kmod-ath10k-ct with kmod-ath10k-ct-smallbuffers, to reduce memory pressure with many concurrent clients a bit.
- check that the PSU is o.k. (enough power, no voltage drop under load - maybe test a compatible one --> same voltage/polarization and plug type, at least the same current figures or better).
- check for overheating, maybe also opening the case and directing a fan at the device (this will void the remains of your warranty, so consider the replacement route first).

Hey, thanks for the reply! I'll check that stuff out. I was worried a hardware issue may be the next step. I recognize now this is getting further and further away from being an OpenWrt issue, so no sweat if this needs to be closed. I will add a final bit of strangeness in case it elucidates anything obvious I should try: I realized that I have a second router provided by my ISP, which also seems to experience these restarts. Usually I'd say this points to a modem issue, but the routers are actually restarting, not simply losing connection (per the logs). Is the modem able to cause router restarts consistently across different firmwares + different hardware? I will also contact my ISP today to see if they have anything on their end. Another possibility (as hinted at by @slh) is that the routers are unable to draw the required power from the outlet. I may need to get another device to check if this is the case, but it does seem to be a possible explanation.
Hello, viewers. Welcome to this latest video on post-exploitation hacking: persistence and continued access. This video is going to be discussing the third and final step of post-exploitation hacking, which is covering tracks: clearing data out, clearing any record of your presence. This particular video is going to be discussing Windows security logs, and how to enumerate them and remove them. This is the hammer technique, or the sledgehammer technique, of hiding your presence, which is to essentially just set fire to the security logs. Any vigilant system administrator will notice that all of their security logs are gone, and that'll be a warning that someone was there. But, you know, it doesn't tell them you were there. So depending on your situation, how much time you have, and how willing you are to let them know some sort of event took place, it's still a pretty valid option. Not to mention Windows, especially since Windows 7, has made deleting specific log entries extremely difficult, so sometimes deleting everything is the safer course. So we're gonna go ahead, and you're gonna learn about a tool in this called wevtutil, the Windows Events utility. So we see here it enables you to retrieve information about event logs and publishers, and so on and so forth. First thing we're gonna do, we're just gonna pretty much run through all of these options. The first one is "el", enumerate logs. So, wow, look at that. Isn't that a bunch of logs? Lots of Microsoft Windows stuff. Here's one that might be useful: Microsoft-Windows-Shell, maybe something we want to examine. Sticky Notes. Lots of junk. Terminal Services: that's definitely one we want to get rid of. Well, not all of them, but certainly we want to consider getting rid of these. RDP: that's a log of, you know, who RDP'd in. Let's check this again so we can get log configuration information. Okay, let's see what that does.

It needs a specific log. Well, we've already found one that we're kind of looking at and considering getting rid of, so let's check it. All right, so this log is the, ah, the RDP Operational log. The publisher is Microsoft Windows. It's got some access information. It's got a log file name. Okay, so if we just want to torch that log, we can try going into this location, see if it will let us in. There we are. All right, so you see lots of things again. We see Remote Connection Manager event logs. I'm not actually going to delete these particular logs, simply because I wanna have them around for later demos and later information. But if you wanted to destroy logs, this is the place you could go. You could do "del *", which would pretty much torch everything. Or you can do it a little bit more effectively, which is what we're gonna do in a minute. Like I said, I'll show you. So we've got get-log, we've got set-log. We can actually modify a lot of configuration. We can check publishers, we can install, blah blah blah. Now, here's a good one. This is what we want: I want to be able to clear a log. Also, exporting logs wouldn't be bad; if you wanted to, ah, do some serious information gathering, exporting a bunch of logs is a great way to go. But it's sort of time consuming and boring. We are going to clear a log. So, wevtutil cl... that's not the one we want at all, is it? So we go back up here to this super interesting stuff. So this is the log we're gonna go ahead and clear. We're gonna do a "cl" instead of a "gl", and bam, it's cleared. See anything interesting here now? No noticeable changes have been made. However, when someone goes through with their actual log viewer, with an Event Viewer, this will now show them absolutely nothing. What would happen if we did this on, say, another log that we may have noticed a moment ago? First, we'll do a get-log so we can see what's in it. There's no owning publisher. It's an administrative log.

It is enabled, all of this information, max size. One thing that's worth doing, if you really wanna mess with an admin, is setting this to disabled while you're doing your stuff, and then setting it back to enabled when you leave. Um, it won't work perfectly. It will record that it was disabled, but it won't record what you did. Which is a big part of, you know, how you can kind of get away with doing things that you probably shouldn't be doing. But in our case, we're just kind of doing the basics instead of anything too high-level and specific, so we're going to go ahead and clear it. So now we have actually gotten rid of that, and we've just emptied it. And now we're feeling pretty safe, you know? They can't actually see us. The logs are gone. They know someone was here, someone did something, but they don't know it was us, and they don't know what we did. Windows event logs can track anything. You can set up a log to track pretty much anything you could imagine. So it's useful to, ah, really go through that list, and actually go through this and just manually check -- well, not manually, necessarily -- go through this and check for anything that might be what you're after. Any event log that might have information. Um, again, I mentioned the shell logs: always worth considering getting rid of them. And the reason for this is really pretty straightforward: you never know what's actually being stored in them. So we see there are actually some other logs you probably want to clear out. When in doubt, clear it out. If you've already started clearing logs out, this PowerShell log is worth getting rid of too. That's really all there is for this video. It was pretty quick, pretty straightforward, pretty simple. All you really need to know about logs is: if you don't have a special tool which can actually break in and change them, better no log present than a log that tells them what you did. With that, we're gonna go ahead and end the video.

I hope you learned a lot. I hope you, you know, experienced some new things to do with logs and to do with clearing them. And until next time, I'm just me, Joseph Perry, and you've been watching this on Cybrary.it.
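For reference, the wevtutil subcommands walked through in this demo look roughly like the following. The log name used here is the full name of the RDP/Terminal Services Operational log being discussed, but treat the exact names as illustrative:

```bat
:: enumerate all event logs on the machine
wevtutil el

:: get configuration information for one log
wevtutil gl Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational

:: export a copy first if you want it for offline analysis
wevtutil epl Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational rdp.evtx

:: clear the log entirely -- the "sledgehammer" approach
wevtutil cl Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational

:: the disable/re-enable trick also mentioned in this video
wevtutil sl Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational /e:false
```

All of these require administrator privileges on the target box.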
Controlling tmpfs memory usage

Is there any way to limit the amount of RAM used by tmpfs without limiting the amount of swap? Most documentation says that tmpfs' size option will limit the total size of the tmpfs partition, and later on says that this space is used both by RAM and swap. And then it says that the default is half your RAM, because if it uses up all the RAM you get OOM fatal errors. It's confusing. I'd love to have it using 1/4 of my RAM but up to 3/4 of swap, for example.

As far as I am aware, you can't control which part of the virtual memory system (i.e. RAM or swap) is used for a tmpfs. However, it is not true that creating a large tmpfs will cause OOM fatal errors. You can create a tmpfs bigger than your total RAM+swap, because none of it is actually used until you put files into the tmpfs. When you do put files in the tmpfs, that will use memory, but only as much as the files you put into the tmpfs. If you then don't touch those files for a long time, and the system needs the RAM more than it needs to keep them in buffer cache, those files will get backed by swap instead of RAM. When your demands on the tmpfs become a large portion of RAM, it's going to affect your buffer cache (things will stop being cached in RAM because the RAM is needed for the tmpfs files). As demand grows, it's going to start going into swap. Eventually, when you have no buffer cache, all your swap is used, and still more requests for memory are made -- then and only then will you start to get OOM errors. So it is in fact safe to specify a large tmpfs for /tmp as long as you have a decent amount of swap too. You say that you'd be okay with it using 25% of your RAM and 75% of your swap. In that case, say you'd normally have 1G of RAM and 2G of swap: I'd set the tmpfs to be 1G and boost swap up a bit, say to 3G. If your system comes under memory pressure, the first thing that is going to happen is that infrequently used files in /tmp will end up being backed by swap instead of RAM. You're not losing all your RAM by making a tmpfs the same size as RAM.

It says "default 1/2 your memory", and here "memory" means physical RAM, but it can also extend to use swap. I'd like to have no more than 400 MB for /tmp, but some weird applications, like UNetbootin, insist on downloading ISO images to /tmp... I wouldn't mind it downloading those files to swap on the rare occasions I use it... but I would mind leaving my tmpfs able to eat all my RAM all the time. And that's what size=4g would do.

I'm sorry, I misread your question. I will severely edit my answer since it is currently mostly worse than useless. I don't at present know how to make it useful, unfortunately.

Thanks. I'm doing experiments now (mostly testing the assumptions you outlined), and will link to the Debian forum thread when I have something conclusive.

So how did the tests turn out, @gcb?

@ck_ I got it to work as I wanted (I could download an ISO using a live USB and then expand the ISO on another block device without losing all my RAM in the process). My memory is not working very well, but I recall I found other flags besides size in newer kernels. Also, I can't find the thread on the Debian forums... maybe it was deleted (it started with some crazy dude complaining I was 'cross-posting' by posting the same question here as well; maybe some admin didn't notice there was useful discussion after the idiotic argument). Will bring the system I used that time back up next week :)
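For reference, capping /tmp at the 400 MB figure mentioned above is done with the size= mount option. A sketch of the /etc/fstab line (note that size= caps the total tmpfs usage, RAM and swap combined; as discussed, there is no knob to split the two):

```
# /etc/fstab: cap /tmp at 400 MB total (RAM + swap), world-writable with sticky bit
tmpfs  /tmp  tmpfs  rw,nosuid,nodev,size=400m,mode=1777  0  0
```

The same options work with a one-off `mount -t tmpfs -o size=400m tmpfs /tmp` for testing before committing to fstab.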
Container Console works only in non-Swarm environment

Dear support, I've two Portainer servers, one in a Swarm environment, one in standalone mode. Same version of Portainer, 2.15.1. Same distro OS. Portainer in standalone lets me use the "Exec console"; Portainer in Swarm lets the "Exec console" time out. What can I check to fix it?

@andreaconsadoriw Thank you for the information. I am going to investigate further. I will update you as I learn more. Thanks!

@andreaconsadoriw Can you post your Portainer docker run install commands for both of those environments? Thanks! Or you can just run docker ps -a on standalone and docker service ls on the Swarm and post. Thanks!

The working Portainer:
docker ps -a | grep portainer
b8351b686a23 portainer/portainer-ce:latest "/portainer" 4 days ago Up 4 days <IP_ADDRESS>:8000->8000/tcp, <IP_ADDRESS>:9000->9000/tcp, 9443/tcp

The non-working Portainer:
docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
mxu11xmevrpe portainer_agent global 3/3 portainer/agent:latest
yhp46kvbwr38 portainer_portainer replicated 1/1 portainer/portainer-ce:latest *:8000->8000/tcp, *:9000->9000/tcp, *:9443->9443/tcp

The working one has a direct Docker connection, but through it I can connect to the console on containers on a remote Docker host connected through the Edge agent.

@andreaconsadoriw Are you trying to exec into the Portainer container? Or a different container? Not all containers allow exec console access. Thanks!

Via the Swarm Portainer I cannot connect to ANY console.

@andreaconsadoriw Are you selecting the Interactive and TTY flags when setting up the container? Are you able to exec into the container on the CLI outside of Portainer? Thanks!

On the non-Swarm environment I don't need to change console settings and everything works without any issue. On the Swarm env, even if I activate Interactive & TTY, it never works. Outside of Portainer, via a Linux Bash, I can connect to the container console:

docker container exec -it 7cac0781fb27 /bin/bash
bash-4.4$

Can it be a Docker Swarm networking issue?

@andreaconsadoriw I am unable to reproduce. Can you send a screenshot of your Container page? The Console requires websockets. Are they open? And you may want to check your Swarm networking. Thanks!

Here is my env. If I try to connect to the "onetimefiles" container (one where the console works fine from the CLI), it sits forever on "connecting". The Swarm nodes are on the same subnet and there is no firewall in the middle. The Console requires websockets --> what are the prerequisites?
So it turns out we need to remove the entry in styles.xml (v21).

Before following this tutorial, please first complete the project configuration for integrating AdMob in Android: GOOGLE ADMOB IMPLEMENTATION ON ANDROID PROJECT

Step 1: Create an interstitial ad object. Interstitial ads are requested and shown by InterstitialAd objects. The first step is instantiating InterstitialAd and setting its ad unit ID. This is done in the onCreate() method of an Activity.

Step 2: Always test with test ads. When building and testing your apps, make sure you…

Error: Execution failed for task ':app:transformDexArchiveWithExternalLibsDexMergerForDebug'. > com.android.builder.dexing.DexArchiveMergerException: Unable to merge dex
Solution:
Step 1: Clean
Step 2: Rebuild

If you are facing the following problem, update the Google Services classpath.
Error: Execution failed for task ':app:transformDexArchiveWithDexMergerForDebug'. > com.android.build.api.transform.TransformException: com.android.dex.DexException: Multiple dex files define Lcom/google/android/gms/internal/measurement/zzws;
Update the google-services plugin to: classpath 'com.google.gms:google-services:3.3.0'

Solution: Check that all Google libraries are on the same version. If Google Play Services is on a different version than the others, a conflict may arise. Firebase and other Google services should be on the same version.

Example: All of these steps are described at https://developers.google.com/admob/android/quick-start. I have only arranged the step-by-step procedure.
Step 1: Example project-level build.gradle (excerpt). Add this in the repositories block of build.gradle (project level), like so:
Step 2:
Step 3: Update your AndroidManifest.xml. Add your AdMob App ID to your app's AndroidManifest.xml file by adding the tag shown below. You can find your App ID in the AdMob UI.
Step 4: Before loading ads, have your app initialize the Mobile Ads SDK…

Step 1: Add the permissions in the Manifest.
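The interstitial setup described in the AdMob Step 1 above (instantiating InterstitialAd and setting its ad unit ID in onCreate()) can be sketched like this with the legacy Mobile Ads SDK. The ad unit ID shown is Google's published sample interstitial ID, not a production one:

```java
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

import com.google.android.gms.ads.AdRequest;
import com.google.android.gms.ads.InterstitialAd;

public class MainActivity extends AppCompatActivity {
    private InterstitialAd mInterstitialAd;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Instantiate the interstitial and point it at an ad unit.
        // This is Google's sample interstitial ad unit ID; always
        // test with test ads, never your production unit.
        mInterstitialAd = new InterstitialAd(this);
        mInterstitialAd.setAdUnitId("ca-app-pub-3940256099942544/1033173712");
        mInterstitialAd.loadAd(new AdRequest.Builder().build());
    }
}
```

Once loaded, the ad is shown with mInterstitialAd.show() at a natural break point in the app.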
Step 2: Request the permissions.
Step 3: Check the permissions.
Step 4: Call it in onCreate() to check whether the permissions are granted.

DOWNLOAD FROM GITHUB
Part 1: Go to the Git repository and download/clone it: https://github.com/googleads/googleads-mobile-unity
Part 2: Download the latest Unity Google AdMob plugin: https://github.com/googleads/googleads-mobile-unity/releases
Part 3: Create a Unity project and import the plugin.
Part 4: Go to the following directory of the repository you downloaded in Part 1: google admob plugins\googleads-mobile-unity-master\googleads-mobile-unity-master\samples\HelloWorld\Assets\Scripts
Part 5: Copy the script to the Assets folder.
Part 6: Attach the script to the MainCamera.
Part 7: It should show default ads when you build it for…

Step 1: Create an empty game object named something like BackgroundMusic, and add a tag name like "BackgroundMusic".
Step 2: Add an AudioSource and attach a music clip to it.
Step 3: Create a C# script named BackgroundMusic (or anything else) and attach the code to the game object.
This page is a collection of methods and utilities for keybinding and keymapping for Mac OS X. There are several ways to swap modifiers. The easiest is the built-in preference panel. For more advanced modifier remapping (⁖ distinguish left Ctrl vs right Ctrl, remap Esc, map the ▤ Menu key, remap the Enter ↵ key, …), you can use Karabiner. Some key changes cannot be done with Karabiner; for those you need Seil. Another popular tool is DoubleCommand. Mac OS X since 10.4 lets you add/change keyboard shortcuts in a specific app. Just go to System Preferences, Keyboard & Mouse, Keyboard Shortcuts, then click the + sign at the bottom. Note: this mechanism is not very flexible, because: Buy a Microsoft keyboard, then use the bundled IntelliType software. Note: depending on what keyboard model you buy, not all features of IntelliType will be available. But basically, if the keyboard costs $30 or more, or has many special keys, most features will be there. Highly recommended. With this solution, you get a functional keyboard, and software that covers all common launcher/shortcut needs. No need to spend hours tweaking keymaps or config files. For detail, see: Microsoft IntelliType Review. For Microsoft keyboards, I recommend: Most expensive gaming keyboards have keys reprogrammable in firmware. This is the best solution, because once a key is set in firmware, you do not need extra software for it to work. You can just plug your keyboard into any computer, any operating system, and the keys will behave as you program'd them. For example, you can program any key swapping, or set F12 to send 【Ctrl+w】. But most gaming keyboards' software is Microsoft Windows only, so you'll need access to a Windows machine first to program the keys. After that is done, you can use it on the Mac. (Check with the keyboard maker to see if they have Mac software.) I recommend the Logitech G710+ Mechanical Keyboard, or the Truly Ergonomic Keyboard. • Quicksilver (software), home page @ http://qsapp.com/. An app launcher.
Assign a hotkey to launch/switch/open apps or files. The hotkey can be a single key (⁖ F1) or a combo key (⁖ 【⌘ Cmd+F1】). • $$$ Keyboard Maestro @ keyboardmaestro.com. A basic key macro software. Good, but a bit expensive. • $$$ QuicKeys @ startly.com. A comprehensive automation software, with key macro features and key macro recording abilities. I used it in the 1990s for 10 years and found it the best. It was the number one most touted productivity-enhancement software in the Mac community in the 1990s. The company changed hands a few times over the years. The first Mac OS X version, released around 2001, is not so good. Since then I haven't used it, so I don't know how good it is today. You can use Mac OS X's system-wide mechanism by creating a key config file, DefaultKeyBinding.dict. See: Creating Keyboard Layout in Mac OS X. If you use emacs on the Mac, see: If you are really serious about key mapping, that is, you want to change any key to any key (for example, weird things like making Tab ↹ act as ⌘ Cmd, making ⌥ Opt type Space, making the number keypad keys extra function keys, or re-interpreting the USB signals as they come in), then these are the tools. (Note: I haven't really used them.)
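To give a flavor of the DefaultKeyBinding.dict approach just mentioned, here is a minimal sketch of ~/Library/KeyBindings/DefaultKeyBinding.dict; the particular bindings chosen are only illustrations:

```
{
    /* Ctrl+w deletes the word before the cursor, emacs style */
    "^w" = "deleteWordBackward:";

    /* Home and End move within the line, Windows/Linux style */
    "\UF729" = "moveToBeginningOfLine:";
    "\UF72B" = "moveToEndOfLine:";
}
```

The selectors are standard Cocoa text-system actions, so these bindings apply in Cocoa text fields of all apps; an app must be restarted to pick the file up, and non-Cocoa apps ignore it.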
Application Programming Interfaces (APIs) are a convenient way to bring app information, ratings, pricing, and app metadata into your website or platform. The Google Play Reviews API allows you to retrieve this data for both Google Play and the App Store. You can even use the API's analysis tools to compare the success of different apps. Here, we'll show you how to integrate this API into your website or app. It's fast, easy, and can make a real difference to the success of your business.

First, sign up for the Google Developer Console if you don't already have an account. This is a requirement for using any Google app APIs, including the Google Play Reviews API. Next, create a new project in the console by selecting "Create New Project." Once you have created a project, make sure that it is linked to your developer account by entering the same name as your project. Then create an API key by clicking "Create API Key" and adding your name and email address. All that is left to do is copy and paste the API key into the field provided. Now that you have completed all of these steps and have a working API key, you can start using the API by simply providing the app name and Android package name. You can retrieve information about the app's ratings, reviews, comments, and more!

Why Get An API For App Reviews?

For many reasons. First of all, because it is one of the most requested types of APIs among developers and companies that need this kind of information. It will also help you get ahead of the competition by putting such an informative tool on your site or app. And finally, extracting useful data from user reviews can help you improve the user experience of your site or app with just a few clicks! We recommend this API for anyone looking for an easy way to integrate an app review API into their site or app. It is highly effective, and it can be called from any coding language, so it can easily be integrated into any site or platform!

Google Play Reviews API: One Of The Best Review Metadata APIs Around!

The Google Play Reviews API is the definitive tool for developers who want to add reviews from Google Play into their application. This tool makes it simple to retrieve user reviews from Google Play. You can use this information to produce reports on user sentiment toward your app. Retrieve app information, ratings, pricing, and much more with this API. It supports both the Google Play Store and the Apple App Store. To make use of it, you must first:

1- Go to the Get Apps Info and Reviews API and simply click the "Subscribe for free" button to start using the API.
2- After signing up at Zyla API Hub, you'll be given your personal API key. Using this one-of-a-kind combination of numbers and letters, you'll be able to use, connect, and manage APIs!
3- Employ the different API endpoints depending on what you are looking for.
4- Once you find your needed endpoint, make the API call by pressing the "run" button and see the results on your screen.
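The steps above can be sketched in code. Note that the endpoint path, the query parameter name, and the Authorization header below are assumptions for illustration only, not the hub's documented values; check the endpoint list on the API's page for the real ones. The sketch builds the request without sending it:

```python
import urllib.parse

API_KEY = "YOUR_ZYLA_API_KEY"  # placeholder: the personal key from step 2


def build_reviews_request(package_name):
    """Build the URL and headers for a (hypothetical) reviews endpoint."""
    # Hypothetical endpoint path; replace with the one listed on the hub page.
    base = "https://zylalabs.com/api/get-apps-info-and-reviews/reviews"
    url = base + "?" + urllib.parse.urlencode({"package": package_name})
    # Bearer-style auth; the header name is an assumption.
    headers = {"Authorization": "Bearer " + API_KEY}
    return url, headers


# Example: build (but don't send) a request for a given Android package name.
url, headers = build_reviews_request("com.example.app")
print(url)
```

From there, any HTTP client in any language can perform the GET with those headers and parse the JSON response.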
package gas

import (
	"errors"
	"fmt"
)

// UpdateWithError update component childes
func (component *Component) UpdateWithError() error {
	oldChild := component.Element
	if oldChild == nil { // it's Gas in first render
		component.RC.Add(&RenderTask{
			Type: RFirstRender,
			New:  component.RC.updateChild(nil, component.RenderElement()),
		})
		component.RC.Exec()
		return nil
	}

	_oldChild := oldChild.BEElement()
	if _oldChild == nil {
		return errors.New("invalid _oldChild")
	}

	p := oldChild.Parent
	newChild := component.RC.updateChild(p, component.RenderElement())

	isChanged, canGoDeeper, err := Changed(newChild, oldChild)
	if err != nil {
		return err
	}

	if isChanged {
		component.RC.Add(&RenderTask{
			Type:               RReplace,
			NodeOld:            _oldChild,
			New:                newChild,
			Old:                oldChild,
			ReplaceCanGoDeeper: canGoDeeper,
			Parent:             p,
		})
	}

	if canGoDeeper {
		newChildE, ok := newChild.(*Element)
		if !ok {
			return fmt.Errorf("unexpected newE type want: *gas.Element, got: %T", newChild)
		}
		newChildE.UUID = oldChild.UUID

		err := component.RC.UpdateElementChildes(_oldChild, oldChild, newChildE.Childes, getOldChildes(newChildE, oldChild), true)
		if err != nil {
			return err
		}
	}

	if isChanged {
		component.RC.Add(&RenderTask{
			Type:   RReplaceHooks,
			Parent: p,
			New:    newChild,
		})
	}

	component.RC.Exec()
	return nil
}

// Update update component childes with error warning
func (component *Component) Update() {
	component.WarnError(component.UpdateWithError())
}

// ReCreate re create element
func (e *Element) ReCreate() {
	e.RC.Add(&RenderTask{
		Type:   RRecreate,
		New:    e,
		Parent: e.Parent,
	})
	go e.RC.Exec()
}

// UpdateChildes update element childes
func (e *Element) UpdateChildes() {
	e.OldChildes = e.Childes
	for i, childExt := range e.Childes {
		e.Childes[i] = e.RC.updateChild(e, childExt)
	}
}

func (rc *RenderCore) updateChild(parent *Element, child interface{}) interface{} {
	if _, ok := child.(*Component); ok {
		childC := child.(*Component)
		childC.RC = rc
		child = childC.RenderElement()
	}

	childE, ok := child.(*Element)
	if !ok {
		return child
	}

	childE.RC = rc
	childE.Parent = parent
	if childE.Attrs != nil {
		childE.RAttrs = childE.Attrs()
	}

	childE.UpdateChildes()

	if childE.HTML != nil {
		childE.RHTML = childE.HTML()
	}

	return child
}

// UpdateElementChildes compare new and old trees
func (rc *RenderCore) UpdateElementChildes(_el interface{}, el *Element, new, old []interface{}, inReplaced bool) error {
	for i := 0; i < len(new) || i < len(old); i++ {
		var newEl interface{}
		if len(new) > i {
			newEl = new[i]
		}

		var oldEl interface{}
		if len(old) > i {
			oldEl = old[i]
		}

		err := rc.updateElement(_el, el, newEl, oldEl, i, inReplaced)
		if err != nil {
			return err
		}
	}
	return nil
}

// updateElement trying to update element
func (rc *RenderCore) updateElement(_parent interface{}, parent *Element, new, old interface{}, index int, inReplaced bool) error {
	// if element has created
	if old == nil {
		rc.Add(&RenderTask{
			Type:       RCreate,
			New:        new,
			Parent:     parent,
			NodeParent: _parent,
			InReplaced: inReplaced,
		})
		return nil
	}

	_childes := rc.BE.ChildNodes(_parent)
	var _el interface{}
	if len(_childes) > index {
		_el = _childes[index]
	} else {
		if IsElement(old) {
			_el = I2E(old).BEElement()
		} else {
			return nil
		}
	}

	// if element was deleted
	if new == nil {
		rc.Add(&RenderTask{
			Type:       RDelete,
			NodeParent: _parent,
			Parent:     parent,
			NodeOld:    _el,
			Old:        old,
			InReplaced: inReplaced,
		})
		return nil
	}

	// if element has Changed
	isChanged, canGoDeeper, err := Changed(new, old)
	if err != nil {
		return err
	}

	if isChanged {
		rc.Add(&RenderTask{
			Type:               RReplace,
			NodeParent:         _parent,
			NodeOld:            _el,
			Parent:             parent,
			New:                new,
			Old:                old,
			ReplaceCanGoDeeper: canGoDeeper,
			InReplaced:         inReplaced,
		})
	}

	if !canGoDeeper {
		return nil
	}

	newE := new.(*Element)
	oldE := old.(*Element)
	newE.UUID = oldE.UUID // little hack

	// if old and new are equal and they have html directives => they are two commons elements
	if oldE.HTML != nil && newE.HTML != nil {
		return nil
	}

	err = rc.UpdateElementChildes(_el, newE, newE.Childes, getOldChildes(newE, oldE), isChanged)
	if err != nil {
		return err
	}

	if isChanged && !inReplaced {
		rc.Add(&RenderTask{
			Type:       RReplaceHooks,
			NodeParent: _parent,
			NodeOld:    _el,
			Parent:     parent,
			New:        new,
			Old:        old,
		})
	}

	return nil
}

func getOldChildes(newE, oldE *Element) []interface{} {
	if newE.IsPointer {
		return newE.OldChildes
	}
	return oldE.Childes
}
Recently,the 11th (2018) Intel Cup National Collegiate Software Innovation Contest for College Students (NSICCS) was held at Beijing Institute of Technology. Team "Turing" from Shandong University won the grand prize of this contest as well as other two awards, namely theAwards of the Largest Entrepreneurial Potentialand the Most Popular Award for Geeks. Other three teams from SDU won the first, second and third prize respectively. The theme of this contest is "application and innovation based on artificial intelligence". Four teams of undergraduates from SDU entered the National Top 20 Final. Team "Turing" includes four undergraduates: Zhou Yang, Wu Jiaqi and Li Song from the College of Software and Dou Zhiyang from the College of Computer Science and Technology. Instructed by Associate Professor Dai Hongjun from the College of Software, this team developed a poster design system called "Davinci's Brush". For the purpose of "everyone can be a painter", this team quantified the four principles of poster design, graphic design rule and multiple design patterns, integrated all the above rules into the intelligent production of their work and created a complete automatic poster design system. A team of four students from the College of Software won the first prize. They are: Yu Junyi, Xu Yichu, Xu Chaoyu and Ji Shaokang. Their work is a kind of audiobook service that helps users create emotional audiobooks and automate the abstract of a book. Another team from SDU's College of Software won the second prize andthe Award of the Largest Entrepreneurial Potential. The team members Kuang Haozhao, Guo Zhuo Qiang, Liu Penghao and Ni Huanqi worked together and designed a set of smart table tennis ball machines. Zhang Qian, Weng Qingtao from the College of Software and Wei Ran and Li Dongdong from the College of Computer Science and Technology together designed an aided recognition system of plagiarism that won the third prize. 
All these works fully demonstrate the innovative spirit and technical ability of SDU students in the field of artificial intelligence and software more broadly, and received consistent acclaim from Intel Corporation's technical experts and experts from the Teaching Steering Committee of the Ministry of Education. NSICCS began in 2008. Since November 13, 2017, the 11th contest has received a great many innovative works from 252 teams of 1,411 students across the country. After the preliminary contest and the final, a total of 1 grand prize, 3 first prizes, 6 second prizes, 10 third prizes and 4 Awards of the Largest Entrepreneurial Potential were awarded. Translated by: Wang Jingnan Edited by: Kyle Muntz, Lang Cuicui
SageCRM Keyword Search Using the Keyword Search in Sage CRM, you can perform a search across all text fields on any of the main CRM entities by typing key terms in the search field. In this blog post, we will go over some tips and tricks to get the most out of the keyword search. The CRM keyword search function uses an "any words" search technique. An 'any words' search returns records containing all of the words listed in a search term, wherever those words appear in the record's text fields or in the text fields of any associated entity record specified in the keyword search view. For example, a search for European software services returns all records containing the words European + software + services in any text field. These words can appear in any order within a record and across more than one text field. If the search term is not enclosed in quotation marks, matching records are picked up even where there are words inserted between the search term words within a record. If quotation marks are used, only records containing the exact phrase are returned. Here are a variety of search characters that can help you narrow your search results in such circumstances: |*%||A * or % can be placed at any position in a word and matches any number of characters. Both the * and the % perform the exact same action. For example, *ope* would match Europe, open, and so on. Note: Leave a space between words when using these characters in multi-word search terms.| |""||To search for a phrase, place it between quotation marks.| |?||A ? can be placed at any position in a word and matches any single character. For example, Americ? would match America but not American. However, you could use Americ?? to return all words containing Americ plus two characters after it.| |=||A = can be placed at any position in a word and matches any single digit. 
For example, B== would match B12, B09, B99 but not B123.| |#||The # must be placed at the start of a word and will return all words that start with the same letter and sound like the word you are searching for. For example, #smith will return smith, smithe, smythe. Note: The # can sometimes be over-inclusive. For example, in the example above, it might also return words like smart, smoke, smell.| |~~||The ~~ symbols allow you to search within a numeric range. To perform a numeric range search, enter the upper and lower bounds of the range separated by ~~. For example, 10~~20 will return all numbers between 10 and 20. Note: Numeric range searches work with positive numbers only; decimal points and commas are treated as spaces, while minus signs are ignored.| |+-||Place + in front of a word that is required in your search or - in front of a word that you wish to exclude. For example, if you wanted to return all records containing the word 'software' in association with the word 'services', you could use the search term software +services. However, if you wanted to return all instances of 'software' except those associated with 'services', you could use the search term software -services.| We hope you will find some of these search characters useful and, more importantly, that they help you with your Sage CRM practice. Article Courtesy of Sage
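To make the wildcard rules concrete, here is a rough illustration of how the `*`/`%`, `?`, and `=` characters can be thought of as a small pattern language. This is a sketch that translates them into regular expressions for demonstration; it is not Sage CRM's actual matching engine.

```python
import re

# Hypothetical translation of Sage CRM-style wildcards into regexes.
# Illustrative only -- not how CRM implements keyword search internally.
def wildcard_to_regex(term: str) -> re.Pattern:
    parts = []
    for ch in term:
        if ch in "*%":       # any number of characters
            parts.append(".*")
        elif ch == "?":      # exactly one character
            parts.append(".")
        elif ch == "=":      # exactly one digit
            parts.append(r"\d")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)

print(bool(wildcard_to_regex("*ope*").match("Europe")))     # True
print(bool(wildcard_to_regex("Americ?").match("America")))  # True
print(bool(wildcard_to_regex("Americ?").match("American"))) # False
print(bool(wildcard_to_regex("B==").match("B12")))          # True
print(bool(wildcard_to_regex("B==").match("B123")))         # False
```

The examples mirror the table above: `Americ?` accepts exactly one extra character, while `B==` requires exactly two digits after the B.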
In short, Forge Networking is a free and open source multiplayer game (multi-user) networking system that integrates very well with the Unity game engine. You wanna make a multiplayer game or real time multi-user application? This is the library for you. Welcome to the most innovative networking solution (which works with the Unity game engine). Forge's network bandwidth is unbeatable, its performance outclasses other solutions, its flexibility is in a league of its own, and, as always, it has no CCU limits whatsoever. Forge Networking is an open source networking solution built in C#. It is completely multi-threaded and was designed to work both inside and outside of the Unity Game Engine, but mainly in conjunction with Unity. To get started, check out the links listed below in this readme.

- Unity multiplayer games/applications
- Unity independent applications
- User hosted servers
- Run servers on Windows, OSX, Raspberry Pi, and/or Linux
- Developer hosted servers
- MMO, RTS, FPS, MOBA, you name it, Forge does it
- Master Servers, NAT Hole punching, Web Servers, Cache servers, all the servers!
- Server to server communication, yup
- Spin up servers on demand, check
- Tons of other stuff that I won't list otherwise I will go on forever, yes indeed

If you can name it, Forge most likely can do it :), it is built on some basic principles which make any idea a possibility. Forge Networking is a networking solution built with the Unity game engine in mind. However, this is actually the 2nd version of Forge, so it has some interesting properties. The original Forge (classic) was built directly inside of Unity and was very tightly integrated with Unity. Once we learned everything we could from that version, we opened up a blank Visual Studio project and began work on Forge Networking Remastered. Forge Networking Remastered was developed completely independently of Unity; it was tested and debugged in a plain C# project. 
Once it was working, we added Unity integration. This means that you can easily create native C# applications which run on Forge to support your games and applications such as Relay servers, NAT hole punching servers, chat servers, master servers, cache servers, websocket servers, you name it! Forge was developed in a way that makes it easy for you to serialize data in any way that you want. This allows you to make your project as secure as you want or as fast as you need. Use the links below to learn how to create your first project with Forge Remastered, and be sure to join our active Discord server to talk with other Forge users.

Discord - Join us and the growing community, for talking about Forge Networking as well as just networking in general. Even if you don't exactly use Forge Networking in your project you can get a ton of insight from this community :)

Ways to support the project

- Make pull requests :D
- Contribute to tutorials, documentation, and in the Discord community
- GitHub Sponsorship

This project exists thanks to all the people who contribute. [Contribute].
Splitting();

const verbs = document.getElementById('verb-list');
const menu = document.getElementById('verb-menu');
const menuToggle = document.getElementById('menu-toggle');
const activeClass = 'menu-active';
let menuItems = [];

const setMenuToggleText = () => {
  if (!document.body.classList.contains(activeClass)) {
    menuToggle.textContent = 'View all';
  } else {
    menuToggle.textContent = '× Close';
  }
}

const handleMenuItemClick = (item) => {
  item.addEventListener('click', () => {
    document.body.classList.remove(activeClass);
    setMenuToggleText();
  });
}

const addMenuItemAction = (menu) => {
  [...menu.children].forEach(item => handleMenuItemClick(item));
}

const setupHeadlineRefs = (headlines) => {
  [...headlines].forEach(headline => {
    headline.id = headline.textContent;
    headline.tabIndex = 0;
    menuItems.push(headline.textContent);
  });
};

const buildMenu = (menu) => {
  menuItems.forEach(item => {
    const el = document.createElement('li');
    const html = `<a class="menu-item-link" href="#${item}">${item}</a>`;
    el.className = 'menu-item';
    el.innerHTML = html;
    menu.appendChild(el);
  });
};

menuToggle.addEventListener('click', (e) => {
  document.body.classList.toggle(activeClass);
  setMenuToggleText();
});

setupHeadlineRefs(verbs.children);
buildMenu(menu);
addMenuItemAction(menu);
This is a meetup group for DevOps culture, system administrators, developers, CTOs, R&D staff and generally technical people interested in DevOps topics. When we meet, it's all about the discussion of the latest technology, continuous improvement, cloud, life, beer, more technology and more technology. Sharing the newest trends and past experiences is always on the menu. This meetup group is organized by ProdOps.io (https://www.prodops.io) as part of our contribution to the technical community in Israel. We also often speak and share our knowledge at these events. ProdOps.io (https://www.prodops.io) is an international company founded as DevOps IL in Tel Aviv, Israel that passionately provides guidance, design and planning together with the hands-on implementation of solutions. We provide experienced software operations experts who work remotely and on-site, with locations in Israel, the UK, and the USA. We believe in building a world where companies can have access to an environment of growth and incredible opportunities. We help achieve your company's objective of having a scalable, reliable and secure infrastructure to support your systems. We're devoted to helping our clients kick ass in business. What separates us from our competition is that our experience-based processes involve thinking of every possible parameter along the way, so that our implemented products don't only live and serve presently; they are resilient to changes and attacks, self-healing in case of need, robust and scalable to serve any number of requests while quickly responding to changes in requirements. And they're fast, so fast you won't believe the timeline you used to work by. ProdOps is a worldwide team of digital experts who are driven by a passion for creating extraordinary production and software delivery. We actually do DevOps! Our mission is to create awesome experiences by embracing extraordinary challenges. 
We also invite you to join the Operations Israel Facebook group http://on.fb.me/Ops-IL for discussion on DevOps. DEVOPSDAYS TEL AVIV IS BACK FOR ITS 7TH TIME RUNNING Call for Papers is OPEN until August 31st, so get submitting - https://devopsdaystlv.com Join leading speakers from around the globe, together with DevOps, SRE, platform, production and operations engineers for two full days of sessions & masterclasses focusing on best practices in operations and systems engineering to gear engineering teams for success. DevOpsDays is a volunteer-led event, by the community for the community - with talks contributed by some of the best from our local community as well as the global DevOps community, with a diversity of talks from beginner through advanced. Dr. Nicole Forsgren will be a keynote speaker at the event, and will (*crossing fingers*) do an Accelerate book signing. Ticket sales will open towards the end of August - so stay tuned! Big thanks as always to our sponsors who make it happen - Elastic, BigPanda, Yotpo. Learn more about sponsorship opportunities here: https://devopsdaystlv.com/sponsor Feel free to suggest open spaces ideas in the comments.
import calendar
import datetime
import re
from typing import Generator, Optional, Tuple

import dateparser
from django.utils import timezone

month_matcher = re.compile(r'^(?P<year>\d{4})-(?P<month>\d{1,2})(-\d{1,2})?$')
counter_month_matcher = re.compile(r'^(?P<month>\w{3})-(?P<year>\d{2}(\d{2})?)$')


def parse_month(text: str) -> Optional[datetime.date]:
    """
    Return a date object representing the first day of a month specified
    as text in iso or iso-like format
    """
    if not text:
        return None
    m = month_matcher.match(str(text).strip())
    if m:
        year = int(m.group('year'))
        month = int(m.group('month'))
        try:
            return datetime.date(year, month, 1)
        except ValueError:
            return None
    return None


def month_end(date: datetime.date) -> datetime.date:
    """
    Returns the last day in a month

    :param date:
    :return:
    """
    wd, last = calendar.monthrange(date.year, date.month)
    return datetime.date(date.year, date.month, last)


def month_start(date: datetime.date) -> datetime.date:
    if isinstance(date, datetime.datetime):
        return date.replace(day=1).date()
    return date.replace(day=1)


def this_month() -> datetime.date:
    # make sure that we are in the current timezone
    # timezone.now() returns UTC
    return month_start(timezone.now().astimezone(timezone.get_current_timezone()))


def next_month() -> datetime.date:
    return month_start(this_month() + datetime.timedelta(days=32))


def last_month() -> datetime.date:
    return month_start(this_month() - datetime.timedelta(days=1))


def date_range_from_params(params: dict) -> Tuple[datetime.date, datetime.date]:
    """
    Returns start and end dates based on data provided in the params dict.
    This dict will typically be the GET dict from a request

    :param params:
    :return:
    """
    start_date = parse_month(params.get('start'))
    end_date = parse_month(params.get('end'))
    if end_date:
        end_date = month_end(end_date)
    return start_date, end_date


def date_filter_from_params(params: dict, key_start='', str_date=False) -> dict:
    """
    Returns dict with params suitable for the filter method of an object,
    specifying dates as they are given in the params dict.
    This dict will typically be the GET dict from a request

    :param params:
    :param key_start: when given, the dict keys will have this value prepended
    :param str_date: when True, dates will be converted to strings
    :return:
    """
    result = {}
    start_date = parse_month(params.get('start'))
    if start_date:
        if str_date:
            start_date = str(start_date)
        result[key_start + 'date__gte'] = start_date
    end_date = parse_month(params.get('end'))
    if end_date:
        end_date = month_end(end_date)
        if str_date:
            end_date = str(end_date)
        result[key_start + 'date__lte'] = end_date
    return result


def parse_date(date_str: str) -> datetime.date:
    """Converts '2020-01-01' to date(2020, 1, 1);
    raises an exception if the string is not in the given format

    :param date_str: string in `YYYY-mm-dd` format
    """
    return datetime.datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=None).date()


def parse_date_fuzzy(date_str: str) -> Optional[datetime.date]:
    """
    Uses dateparser to try to parse a date. Uses only specific locales to make
    dateparser faster, so we extracted it as a function to use throughout the code
    """
    dt = dateparser.parse(date_str, languages=['en'])
    return dt and dt.date()


def months_in_range(
    start_month: datetime.date, end_month: datetime.date
) -> Generator[datetime.date, None, None]:
    end_month = month_start(end_month)
    current_month = month_start(start_month)
    yield current_month
    while current_month < end_month:
        # duplicate the FetchIntention
        current_month = month_start(current_month + datetime.timedelta(days=40))
        yield current_month
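The month arithmetic above has one subtle point: adding 40 days to the first of any month always lands in the following month (since no month is longer than 31 days), and `month_start` then snaps back to day 1. A standalone sketch of the same helpers, re-declared here so the example runs without Django or dateparser:

```python
import calendar
import datetime

# Standalone copies of the helpers above, so this example runs on its own.
def month_start(date: datetime.date) -> datetime.date:
    return date.replace(day=1)

def month_end(date: datetime.date) -> datetime.date:
    _, last = calendar.monthrange(date.year, date.month)
    return datetime.date(date.year, date.month, last)

def months_in_range(start_month, end_month):
    end_month = month_start(end_month)
    current = month_start(start_month)
    yield current
    while current < end_month:
        # day 1 + 40 days always lands in the next month; snap back to day 1
        current = month_start(current + datetime.timedelta(days=40))
        yield current

print(month_end(datetime.date(2020, 2, 10)))  # 2020-02-29 (leap year)
print(list(months_in_range(datetime.date(2020, 11, 5),
                           datetime.date(2021, 1, 20))))
# yields the three month starts: Nov 1, Dec 1, Jan 1
```

The 40-day step works because 31 < 40, so the jump can never stay in the same month, and 40 < 59, so it can never skip one either.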
As I gradually hone my translation skills through Gengo.com, I'm learning not just new words in Japanese, but more effective ways of researching word meanings and their proper translations in English. When translating in an environment where conveying the meaning of the original text accurately is critical, there are often cases where only one word or phrase would be appropriate. The translator must not just understand the original word or phrase in its entirety, but also understand the context it is used in and the best word or phrase to translate it to. I feel this is especially important for translating business-related texts. Let's look at a specific example: the word 航空局 (koukuukyoku), which I recently came across. It didn't show up in the dictionary I typically rely on (Dictionary Goo), and even if it did, dictionaries often contain aged expressions that are no longer in popular use. Fortunately, I already knew that 航空 means 'aviation' (for example 航空券 = plane ticket) and had a general feeling that 局 meant something like an organization. The dictionary gives 'bureau' for that word, so now we have a first guess at a translation: "aviation bureau". A search for that word (in double quotes) with Google yields a bunch of pages, and the second hit actually discusses something called the "Japan Civil Aviation Bureau". In that same sentence, the word 航空局 is even present, so now we have a potentially better translation. But, as you know, sometimes Wikipedia isn't correct, so let's get a second opinion to validate this translation. Searching for "航空局" on Google gives this official looking site, which happens to have a link titled "航空局の役割" (役割, yakuwari - role, part). Clicking on this link gives a good description of what the role of the 航空局 is, but doesn't have any English on the same page. 
The website does have an English version (the button is on the top right of the page), but clicking on the link starts back at the main page, which is laid out differently than the Japanese homepage. If it had the same visual layout, it would have been easy to just match up the terms between Japanese and English. A quick inspection of the English homepage shows the term "Civil Aviation Bureau", but clicking on it brings you to a page with significantly different content than the Japanese 航空局 page. However, looking at the two URLs, we can see that "/koku/" is in both (probably a shortening of "航空"), and this is further evidence that "航空局" is the "Civil Aviation Bureau". The only question remaining here is whether we use "Japan" at the beginning of the term or leave it out. While specifying "Japan" could be safer (since omitting that term could imply some other country's bureau), if the person reading the translated English text was in Japan, or knew the text was from Japan, they might find it slightly unnatural. Ideally, if this is a paid job you should talk to the customer to verify things like this, but that doesn't always work out because of deadlines and the fact that the customer may not respond in time. Assuming we can't rely on the customer, I would probably go without using "Japan" in the term's translation, since that part is probably obvious to the person reading this. Also, I trust the official website I mentioned more than Wikipedia. If you still aren't convinced, you could do more followup searches to further validate your translation. Ideally, if you know the name of the customer who posted the translation job, you can look on their website and see if they happen to have any similar texts translated in English. If you find a term which seems to be used to mean 航空局, then it's probably best to use that in your translation as well, even if it conflicts with other sources. The hardest part is being able to do all this quickly. 
After all, we’ve done all this work just for a single word (:
Who uses QUnit A page akin to https://gruntjs.com/who-uses-grunt to see which projects are using QUnit, with links to examples of their integration to learn from and peek into. Using this gather a dozen or so to start with. https://fluidproject.org/ Fluid is an open, collaborative project to improve the user experience and inclusiveness of open source software. Their Infusion uses QUnit throughout using an integration layer called node-jqunit. /cc @jobara Hi, I noticed you are an active developer on the project. Do you think it would be okay for us to mention the Fluid project with a link on our website? I noticed the project currently uses an embedded copy of QUnit 1.12. Do let us know if you'd like any help upgrading (or chat with us). Always happy to help! @Krinkle should be okay to mention our project on your site. Thanks for asking. Regarding the embedded version of QUnit, we'll try to write up some info about why we are using that version and share it back with you. Hi @Krinkle - our integration with the ancient QUnit made a few patches to deal with some test framework startup races caused by the old setTimeout(xxxx, 13) system. It's very likely that these are resolved or at least behave differently in newer QUnits but when I poked around a few years ago I couldn't easily find the equivalent sites to inspect for fixes, so it seems the logic was substantially reworked after 1.x. We have an in-house testing framework that is fairly tightly coupled to the internals of the old QUnit. It would be nice to upgrade but would probably need a fair amount of dedicated time from people familiar with the internals of both systems - we hope to get around to it someday. Thanks for the note and the offer of help! We're happy to be cited as users happily benefitting from the system but I imagine our integration is not something we would want others to learn from. 
https://doc.wikimedia.org/ Wikimedia Foundation, the non-profit behind Wikipedia and many other free knowledge and open source projects. Uses QUnit for the featured VisualEditor (examples) and MediaWiki (example) projects, as well as in most other projects. https://wordpress.org/ (test suite) WordPress is open source software you can use to create a beautiful website, blog, or app. As mentioned on the site already, many Ember projects use QUnit. So probably lots of potential examples from that community. @smcclure15 Hi - I was looking for a MathWorks project to list as an example but couldn't find any. Perhaps I'm looking in the wrong place? All of MathWorks' src/test is within internal SCM systems and private. I can say that we have on the order of 5K test files amounting to ~36K QUnit testpoints. I can get you other stats/patterns as needed, but I need to respect confidentiality. @smcclure15 Nice, those are some cool stats! I do see a lot of open-source projects by MathWorks on GitHub under @mathworks, @mathworks-robotics, @mathworks-simbiology etc., but couldn't find any frontend or Node projects that use QUnit. Perhaps in the future :) Feel free to submit a patch here anytime, I'd love to feature a MathWorks open-source project that utilises QUnit.
PHP is Good, Actually

Okay, haters. You know who you are. I work with a bunch of y'all 😛 I'm here to explain why I, a professional web software developer, in the year 2022, continue to defend PHP.

PHP is easy to learn

What do you need to learn in order to build and deploy a simple PHP application?

- FTP or Git
- Basic programming concepts

Contrast with what you need to learn in order to build a simple React application:

- npm / yarn / pnpm / some other package manager
- Webpack / Parcel / Vite / some other build tool
- Production build / deployment pipeline
- Basic programming concepts
- and probably a little CORS

PHP is an excellent on-ramp for the web development journey. You can start with basic templating and then add complexity as needed and desired. You can get your first site up really, really fast, which is such a thrill when you are learning something completely new.

PHP is flexible

You can just use PHP as a templating language. You can expand and just do procedural stuff, like taking some data and looping through it to echo rows into a table, or get fancier with object-oriented application logic. You can write pure functions or wallow in side effects. And it all plays well together, so you can build, learn, iterate, and update without having to overhaul everything. As a playground, it's fantastic. If you just want something that works, it's excellent. And if you want to be able to pick the best paradigm for each part of your application while keeping it all in the same codebase? Knock yourself out.

PHP is mature

PHP has been around for a while. If you keep writing code the same way you did twenty years ago, it will keep working. There are new things, new ideas, new features, new methods, but they're just recommended, not breaking changes. If you want to take the time to learn something deeply, over months or years, without the whole landscape shifting underfoot, PHP is a good option. 
And with that maturity comes a lot of other benefits: excellent tools, cheap hosting, extensive documentation, bottomless examples that work now, and will likely continue to work for the foreseeable future.

PHP has simple tooling

Some days, this is my number one sales pitch for PHP: you don't have to compile, transpile, or build. The file structure on your development machine just needs to get copied as-is up to the server, and everything will just work. There are few things that can go wrong, and most of those are just ensuring that your local machine is running with the same PHP settings as the server. Once you're set up, you will probably never have to think about it again.

PHP is a good tool for many common jobs

There are jobs that I wouldn't pick PHP for. Like, my day job, building enterprise apps with complex business rules, mountains of data, and a rotating cast of developers moving from project to project? Nope, not a good fit for PHP. We need the strict, complex type checking you get with something like C# or Typescript. That alone is a deal-breaker. But most of the internet is not big, complicated applications. Most of it is blogs and brochures and menus and meet-the-team pages. It's wikis and sharing stupid gifs and simple CRUD applications. PHP is excellent for that stuff.
Vibrant discussion about CSLA .NET and using the framework to build great business applications. I'm hoping someone can point me in the right direction. I have a telerik grid (under ASP.NET) configured on my page setup along with a CslaDataSource configured to my BusinessListBase derived class. This has one level of objects in a collection of BusinessBase objects- nothing fancy here. When I click the "Add new item" item, I would hope that a new BusinessObject would get created and bound to for the initial value display. I am hoping that I have the opportunity to load default values in the DataPortal. I am not seeing anything that sticks out - I tried overriding the AddNewCore() method on the collection to see if the grid was making a call but nothing came through. I'm guessing I'm probably missing something obvious here. Web data binding doesn't normally end up invoking AddNewCore(), because they don't use IBindingList. Do you know what postback event occurs when you click "Add new item"? That is most likely where you'll need to put a bit of code to ask your collection for a new item - and you could call IBindingList yourself if that seems easiest - or implement some other public method on your collection that creates/adds a new item. This makes perfect sense - an AddNew-style event is not readily apparent on the telerik control so I am pursuing this with telerik. Once I have an approach, I'll post it here in case others are looking for the same behavior. The telerik article that deals with this is located here: http://www.telerik.com/help/aspnet/grid/grdInsertingValuesInPlaceAndEditForms.html If that link isn't available down the road, here is the general idea with Telerik: There are two events on the grid that you can hook - InsertCommand and ItemCommand. The easiest command to hook is ItemCreated because it's far less noisy than ItemCommand. 
In InsertCommand, you can test to see if the item in question is being inserted (as opposed to editing a pre-existing object) and assign a new business object to it. If you have an inline or form based edit, you will get this command fired once for each element being rendered. If you are using a custom control, you will see this event fired once for the control. If you are using a custom control, you must float the "public object DataItem" property on your control to get the data object assigned for binding purposes. The code in the ItemCreated event looks something like this: if( e.Item.OwnerTableView.IsItemInserted ) e.Item.DataItem = MyBusinessObject.CreateNewObject(...); You'll then need to adjust your CslaDataSource_UpdateObject(...) command to behave appropriately (insert vs. update). I need to correct - You do not need to modify UpdateObject event on your datasource to handle Inserts and Updates. The InsertObject event will be properly raised. Also, the telerik Grid methods are ItemCreated and ItemCommand - as opposed to InsertCommand as I incorrectly stated earlier.
Here's my set up: Desktop with Windows XP SP3 iHAS424B flashed correctly with C4eva's iHAS cfw external enclosure connected via USB Verbatim 003 (possibly #96577, I'm not sure, but I do remember buying them off newegg.com on sale $50 for 50 disks) I can't get consistent quality burns. I usually get a solid spike at the layer break (approx PI Max 200+) but the averages across the board seem legit. I've also had times where the averages are relatively high. I've tried God knows how many setting changes and wasted God knows how many discs and still haven't found a solid setting that works. Actually at one time ImgBurn was having a hard time verifying the image so I began burning in safe mode. I would clear OPC and cycle the drive before every burn; I had FHT/OHT/SB/perform OPC on Burn Proof/OS off. Burned at 4x with 512MB buffer and verified image, and I had a stretch of about 12 good burns. Then following the same procedure I got a stretch of about 3 or 4 bad burns. Now I've tried the settings outlined by Lightning UK: FHT/SB = On, OHT/OPC/OS = off (BP on and off), cycle the drive once before changing settings. I've received a stretch of 4-5 bad burns as stated above. 1. What could the problem be? 2. Do I literally have to burn through 50 discs before I find a setting group that works? 3. Once I find a setting group that works, will it always work? 4. Is it a bad spindle, and is that really possible? 5. What's the correct model # of Verbatims to purchase, and will they work or will I have the same scenario play out again? 6. Is it the drive itself, and how could I tell? 7. Is there a certain model # of iHAS drives that perform more consistently than others? 8. I don't see many people using the same iHAS drive as I do; could that be an indication that I should go with a different drive? 1. What's the deal about clearing OPC and cycling the drive? There are so many different opinions and it gets confusing. 
I've read that we should only cycle once before the setting changes, but why did I have success with 12 burns by clearing the OPC before each burn? 2. What happens if you get a bad burn, do you continue on with the same settings? If so, how do you know when to try a different group of settings? 3. What's causing the inconsistency: wrong settings, wrong set up (i.e. external enclosure), too many system processes, bad disks, or bad drive? How do you narrow it down? 4. I would assume that a person would pick a group of settings, burn and get good results, but the more I watch, the more I feel like it has less to do with settings, because the burns on many tests seem to improve after each consecutive burn, so it leads me to believe that there's no magic setting that will automatically produce a good disk. Am I correct? I hope I've explained my situation and my mindset and look forward to hearing back from Lightning UK on this topic. Thank you! Edited by thedon01, 15 March 2012 - 04:03 PM.
This week has been without a set direction. I thought I would be working on first class gpu functions and the like, but I have had a bunch of bug reports from users, so they went to the top of the pile. Here's a smattering of the goings on: My compiler produced working code, but it used a very lazy method to make up glsl names from lisp ones. Basically it appended the name to the namespace, changed periods and hyphens to underscores, and not much more. This made for huge ugly names. I simply added a name mapping pool to the compile pass and now the names are tiny. I also added indentation. Those two things alone made a massive difference. I thought this was a no brainer. Before I had glsl-spec I had some crazy type hacks in my compiler so we could use the pseudo-generic types used in the official glsl spec. Now we have a better source of data, so I could remove these old hacks. Turned out that broke more than I expected :D It revealed the problem in the next section and that one of the name-hacks I removed had been accidentally fixing up the bad data. Coding is weird. Super embarrassing, but I somehow had all the casing of the glsl variables incorrect. Fixed that. Can now set a swizzle of a vector: (setf (s~ (v! 1 2 3 4) :yz) (v! 10 10)) -> (v! 1 10 10 4) I have a ticket to add support for this to my glsl compiler. When compiling gpu pipelines my lisp compiler was telling me that there was some ambiguity in the types of data being used and that it was having to use some more generic (and costly) operations. I found what it was and changed the types of the data to make sure things are kosher. The compiler is happy now. More than most, this week has reminded me that I need tests. I looked at one of the more popular testing frameworks for lisp, liked it and have set up Varjo to use it. I really want to look into generative tests using this thing. Spitting out random (but supposedly valid) programs and throwing them at the compiler sounds cool. 
I need to do the same for CEPL, where we actually use GL to make sure the shit we generate really compiles. Not sure of the cleanest way to set this up yet.

Lots of issues filed, problems identified, and even a couple of things fixed. But all far too detail-y and boring to write about.

It's nanowrimo this month and my partner and I usually do this. He actually works on a novel, though, whereas I pick a feature/project that I want to focus on and hammer on that. I'll hopefully dig into details on that next week.
Access SMB over a custom port

I have a fileserver hosted on AWS. I was using SMB port 445 to access the fileserver. I found out that some ISPs have blocked port 445, so I set up my fileserver instance behind an AWS network load balancer, created a custom listener port on the NLB, and forward requests on this port to the fileserver instance on 445. But Windows shares use port 445 by default, so is there a way to make a request to the NLB on a custom port and indirectly to my fileserver, since I have a forward rule on the NLB? Is this possible? Note: I know that there is no syntax for alternate ports on Windows shares. I am looking for a workaround.

You could do port forwarding with a firewall, e.g. forward public port 4455 to 445 internally.

I can give the public IP of my NLB to the Windows share. It will access the NLB on 445 (this is the default and can't be changed), but you are saying that I can do port forwarding on my firewall. So now the firewall will change the request to NLB:4455. I have a listener on the NLB at 4455 and this will forward the request to the fileserver on 445, right?

I don't know what your topology looks like, but if your firewall is in front of your NLB, you can set up the port forwarding there so that your firewall redirects the requests from 4455 public to port 445 on your NLB, and the NLB will forward the traffic again to port 445 of the hosts.

Does this answer your question? What's the syntax for accessing smb/windows shares via alternative ports?

@Massimo it doesn't. I am looking for a workaround.

@Ali I know, but there is also no support on the client side; even if you place a load balancer performing port forwarding in front of the file server, there is no way to tell the client to connect to a different port.

SMB is generally not a protocol used over the internet. You should set up a VPN between your client (laptop?) and AWS and tunnel SMB through the VPN.
Then you won't have to worry about ISPs blocking port 445, won't need the AWS fileserver open to the world, and will also have an extra layer of security. There are many options for VPNs, both open source and commercial. Hope that helps :)

Make a portproxy on Windows with a virtual (loopback) adapter to redirect the port. View the step-by-step guide (in pt-BR, but with images for each step): https://apolonioserafim.blogspot.com/2021/05/acessar-servidor-samba-em-porta.html

Transcribed: ACCESS SAMBA SERVER ON A CUSTOM PORT / REDIRECT PORT - IPV4 PROXY PORT WITH NETSH

If you need to access SMB 445 with a custom port:
- Run the Add Hardware wizard (Windows + R): hdwwiz.exe
- Select "Install the hardware manually from a list (advanced)", then click Next to continue.
- In the list select "Network adapters", then click Next to continue.
- In the list on the left-hand side select Microsoft; in the list on the right-hand side select Microsoft Loopback Adapter, then click Next to continue.
- At the end click Next and the interface will be installed.
- To rename it, go into settings, select Internet Protocol Version 4 (TCP/IPv4), then click Properties.
- Set any IP, just not one in the same IP range as your current network, to avoid any IP conflict (in this example, <IP_ADDRESS> is used).
- Open PowerShell or CMD as administrator and enter the command below:

netsh interface portproxy add v4tov4 listenaddress=<IP_ADDRESS> listenport=445 connectaddress=smb.example.com connectport=44518

Where:
- listenaddress - the address defined in the previous steps, in this example <IP_ADDRESS>
- listenport - the original Samba port (445, 137, 138, 139); usually just 445 will solve it
- connectaddress - the remote address to be proxied; can be a DNS name or an IPv4 address n.n.n.n
- connectport - the customized service port, in the example, 44518

Reboot the computer. This should now allow local connections to <IP_ADDRESS> port 445 to be directed to smb.example.com port 44518.

Link-only answers rot over time; several of mine have.
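Once a rule like the one in the walkthrough above is in place, it can be verified and rolled back with the matching portproxy subcommands. A sketch (here <IP_ADDRESS> is the loopback-adapter address chosen earlier; these are Windows-only configuration commands):

```shell
# List the active v4tov4 portproxy rules to confirm the redirect is in place
netsh interface portproxy show v4tov4

# Remove the rule again if you need to undo the redirect
# (listenaddress/listenport must match the values used in the add command)
netsh interface portproxy delete v4tov4 listenaddress=<IP_ADDRESS> listenport=445
```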
Please add some content to this answer so it can stand alone in case the link target goes away. We have 12-year-old Server Fault questions on here, so you need to plan for that sort of longevity.

This is now (2024) possible! https://techcommunity.microsoft.com/t5/storage-at-microsoft/smb-alternative-ports-now-supported-in-windows-insider/ba-p/3974509

Map an alternative port with NET USE. To map an alternative TCP port using NET USE, use the following syntax:
NET USE \\server\share /TCPPORT:<some port between 0 and 65536>
NET USE \\server\share /QUICPORT:<some port between 0 and 65536>
NET USE \\server\share /RDMAPORT:<some port between 0 and 65536>
For example, to map the G: drive port to TCP/847, use:
NET USE G: \\waukeganfs1.contoso.com\share /TCPPORT:847

Map an alternative port with New-SmbMapping. To map an alternative TCP port using the New-SmbMapping PowerShell cmdlet, use the following syntax:
New-SmbMapping -RemotePath \\server\share -TcpPort <some port between 0 and 65536>
New-SmbMapping -RemotePath \\server\share -QuicPort <some port between 0 and 65536>
New-SmbMapping -RemotePath \\server\share -RdmaPort <some port between 0 and 65536>
For example, to map the G: drive port to TCP/847, use:
New-SmbMapping -LocalPath G -RemotePath \\waukeganfs1.contoso.com\share -TcpPort 847

Unfortunately, adding port numbers to UNC paths is not yet supported. This means that you have to mount the shares with the net command before you can access them.
University of New Mexico: UI/UX Designer at University of New Mexico (Albuquerque, NM) Posted: Sep 23, 2020

Project ECHO has an exciting opportunity for a UI/UX Designer to join our ECHO Digital Team and design innovative technology solutions for our growing movement of 400 partners in 40 countries. Project ECHO exists to democratize knowledge: linking experts at centralized institutions with regional, local, and community-based workforces. We are experiencing exponential growth and aim to touch the lives of 1 billion people by 2025. We recognize the importance of user-centered design in realizing that ambitious goal. We are looking to expand our UI/UX research and design team with an experienced UI/UX Designer who will help us design our technology platform, ECHO Digital. In this role, Project ECHO will allow you to use your skills and experience to make a significant impact in improving lives around the world.

As a UI/UX Designer, under the direction and guidance of the Lead UI/UX Designer, you will collaborate with our global team of product managers, engineers, designers, and researchers in a fast-paced and rapidly changing environment to make ECHO Digital both functional and appealing, with a keen mind for intuitive and accessible experiences and a keen eye for beautiful and highly usable interfaces. If you are passionate about collaboration, design, and innovation, and want to make a lasting, positive impact on millions of lives around the world, this position is for you.

Your role will focus on:
- Developing an understanding of Project ECHO's diverse community of users, identifying their challenges, frictions and opportunities, and proposing and devising elegant solutions to add value to their ECHO experience.
- Storytelling and framing problems for a diverse audience of stakeholders, helping them envision ECHO Digital solutions and how those solutions would benefit users within the ECHO community.
- Designing and delivering user journeys, user flows, wireframes, storyboards, process flows, sitemaps, mockups, and interactive prototypes optimized for a wide range of devices and interfaces.
- Designing graphic user interfaces informed by a strong sense of layout, informational hierarchy, typography, and color, as well as contributing to the development of ECHO Digital's design system and style standards.
- Understanding inclusive and accessible design principles and best practices, as well as localization, internationalization and cultural relevancy, in order to make them intrinsic to the design of ECHO Digital's experiences and interfaces.
- Rapidly testing and iterating your designs with peers, stakeholders, and the ECHO community.
- Communicating effectively in a cross-functional product development team.
- Asking smart questions, taking risks, and championing new ideas.

Project ECHO is a telemedicine and distance-learning program with partners all over the world, and it prides itself on being a values-based organization. Our seven values are: Service to the Underserved, Demonopolize Knowledge, Mutual Trust and Respect, Teamwork, Excellence and Accountability, Innovation and Learning, and Joy of Work. We strive to find individuals who can embrace and exemplify these values.

ECHO Digital is a globally federated, multi-lingual platform that will be used by millions of users across the world. Using open APIs, we aim to integrate with government systems and national health registries to create systems change. The platform will enable a wide variety of use cases such as tracking and managing learning sessions and attendance, communicating and sharing between online communities, searching for and discovering experts and resources, and much more. Project ECHO is growing exponentially, serving communities across the globe. By joining our team, you will be part of a movement that is leading the way in technology-based healthcare initiatives.
Join our team and be a part of our goal to touch the lives of 1 billion people by 2025!
I'm a postdoctoral researcher in the Department of Linguistics at Ohio State University. My research interests are in phonetics, psycholinguistics, and laboratory phonology. I study how predictability and communication lead to conventionalized morpho-phonological patterns, and more generally how prediction shapes spoken words and linguistic systems. My phonetics research focuses on the acoustics of voicing contrasts.

Turnbull, R., Seyfarth, S., Hume, E., & Jaeger, T. (2018). Nasal place assimilation trades off inferrability of both target and trigger words. Laboratory Phonology, 9(1):15, 1-27. [ doi ] [ paper (pdf) ]

Case, J., Seyfarth, S., & Levi, S. (2018). Does implicit voice learning improve spoken language processing? Implications for clinical practice. Journal of Speech, Language, and Hearing Research, 61, 1251-1260. [ doi ]

Seyfarth, S., Garellek, M., Gillingham, G., Ackerman, F., & Malouf, R. (2018). Acoustic differences in morphologically-distinct homophones. Language, Cognition and Neuroscience, 33(1), 32-49. [ doi ] [ paper (pdf) ] [ stimuli (zip) ]

Garellek, M. & Seyfarth, S. (2016). Acoustic differences between English /t/ glottalization and phrasal creak. Proceedings of Interspeech 2016. [ paper (pdf) ]

Seyfarth, S. & Garellek, M. (2015). Coda glottalization in American English. Proceedings of the 18th International Congress of Phonetic Sciences. [ paper (pdf) ]

Malouf, R., Ackerman, F., & Seyfarth, S. (2015). Explaining the number hierarchy. Proceedings of the 37th Annual Conference of the Cognitive Science Society. [ paper (pdf) ]

Seyfarth, S. & Myslin, M. (2014). Discriminative learning predicts human recognition of English blend sources. Proceedings of the 36th Annual Conference of the Cognitive Science Society. [ paper (pdf) ]

Seyfarth, S., Ackerman, F., & Malouf, R. (2014). Implicative organization facilitates morphological learning. Proceedings of the 40th Annual Meeting of the Berkeley Linguistics Society. [ abstract (pdf) ] [ paper (pdf) ]

Rohde, H., Seyfarth, S., Clark, B., Jäger, G., & Kaufmann, S. (2012). Communicating with cost-based implicature: a game-theoretic approach to ambiguity. Proceedings of the 16th Workshop on the Semantics and Pragmatics of Dialogue. [ paper (pdf) ]
# coding: utf-8

# In[31]:

from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

# P1 Homework 6


class Crank_Nicolson(object):

    def __init__(self, condiciones_borde, condiciones_iniciales):
        '''Initializes the class with the boundary conditions,
        initial conditions, mu, gamma, etc.'''
        self.cb = condiciones_borde      # n(x=0,t)=1 and n(x=1,t)=0
        self.ci = condiciones_iniciales  # n(x,0)=exp(-x**2/0.1)
        self.mu = 1.5
        self.gamma = 0.001
        self.t_actual = 0
        self.numinterx = 499
        self.numptosx = 500
        self.dx = 1 / self.numinterx

    # tridiagonal matrices
    def calcula_d(self, n, dt, r, pregunta):
        d = np.zeros(self.numptosx)
        if pregunta == 1:
            for i in range(1, self.numptosx - 1):
                d[i] = (r * n[i+1] + (1 - 2*r) * n[i] + r * n[i-1]
                        + dt * (n[i] - n[i]**2))
        else:
            for i in range(1, self.numptosx - 1):
                d[i] = (r * n[i+1] + (1 - 2*r) * n[i] + r * n[i-1]
                        + dt * (n[i] - n[i]**3))
        return d

    def calcula_alpha_y_beta(self, n, dt, r, pregunta):
        d = self.calcula_d(n, dt, r, pregunta)
        alpha = np.zeros(self.numptosx)
        beta = np.zeros(self.numptosx)
        Aplus = -1 * r
        Acero = (1 + 2 * r)
        Aminus = -1 * r
        if pregunta == 1:
            alpha[0] = 0
            beta[0] = 1  # comes from n(t, 0) = 1, n(t, 1) = 0
        else:
            alpha[0] = 0
            beta[0] = 0
        for i in range(1, self.numptosx - 1):
            alpha[i] = -Aplus / (Acero + Aminus * alpha[i-1])
            beta[i] = (d[i] - Aminus * beta[i-1]) / (Aminus * alpha[i-1] + Acero)
        return alpha, beta

    def avanza_CN(self, n, nnext, dt, r, pregunta):
        '''Returns the species density after one time step dt,
        using the Crank-Nicolson method. Takes the arrays n and nnext.'''
        alpha, beta = self.calcula_alpha_y_beta(n, dt, r, pregunta)
        nnext[0], nnext[499] = self.cb
        # back-substitution must sweep from the right boundary toward the
        # left, so that nnext[i+1] is already known when nnext[i] is computed
        for i in range(self.numptosx - 2, 0, -1):
            nnext[i] = alpha[i] * nnext[i+1] + beta[i]
        return nnext


def condicion_inicial_y_borde(condiciones, numptosx, pregunta, arreglox=None):
    if pregunta == 1:
        ci = np.zeros(len(arreglox))
        for i in range(len(arreglox)):
            ci[i] = np.exp((-arreglox[i]**2) / 0.1)
        ci[0], ci[len(arreglox) - 1] = condiciones  # boundary conditions
    else:
        ci = np.random.uniform(low=-0.3, high=0.3, size=numptosx)
    return ci


'''
# Main setup P1
numptosx = 500
xi = np.arange(0, 1, 1/(numptosx - 1))
ci = condicion_inicial_y_borde([1, 0], numptosx, 1, xi)  # n(0,x)
Fisher = Crank_Nicolson([1, 0], ci)
dt = 0.25 * (Fisher.mu/Fisher.gamma) * (Fisher.dx**2)
numptost = 5/dt + 1  # dt must be defined before this line
r = dt*Fisher.gamma/(2*(Fisher.dx**2)*Fisher.mu)  # r for the diffusion part

n = ci
nnext = np.zeros(numptosx)
N = np.zeros((numptosx, int(numptost)))
N[:, 0] = ci
for i in range(1, int(numptost)):
    nnext = Fisher.avanza_CN(n, nnext, dt, r, 1)
    n = nnext.copy()  # copy so n and nnext do not alias the same array
    N[:, i] = nnext

# Plots
x = np.linspace(0, 1, numptosx)
fig1 = plt.figure(1)
fig1.clf()
for j in range(0, len(N[0]), 100):  # plot fewer curves
    plt.plot(x, N[:, j], 'r-')
plt.plot(x, N[:, 0], 'b-')  # plot the initial condition
plt.title(r'Fisher-KPP equation solution ($n(x,t)$) for several times')
red_patch = mpatches.Patch(color='red', label=r'$n(x,t^*); \ t^* \ fixed$')
blue_patch = mpatches.Patch(color='blue', label=r'$n(x,t=0)$')
plt.legend(handles=[red_patch, blue_patch])
plt.xlabel('Position (units)')
plt.ylabel(r'Solution $n(x,t)$ (units)')
fig1.savefig('fisherkpp')
plt.grid(True)
plt.show()
'''

# P2 Homework 6

# Main setup P2
numptosx = 500
np.random.seed(100)
ci = condicion_inicial_y_borde([0, 0], numptosx, 2)
Fisher = Crank_Nicolson([0, 0], ci)
dt = 0.25 * (Fisher.mu/Fisher.gamma) * (Fisher.dx**2)
numptost = 5/dt + 1
r = dt*Fisher.gamma/(2*(Fisher.dx**2)*Fisher.mu)  # r for the diffusion part

n = ci
n[0] = 0
n[499] = 0
nnext = np.zeros(numptosx)
M = np.zeros((numptosx, int(numptost)))
M[:, 0] = ci
for i in range(1, int(numptost)):
    nnext = Fisher.avanza_CN(n, nnext, dt, r, 2)
    n = nnext.copy()  # copy so n and nnext do not alias the same array
    M[:, i] = nnext

# Plots
x = np.linspace(0, 1, numptosx)
fig2 = plt.figure(2)
fig2.clf()
for j in range(0, len(M[0]), 100):  # plot fewer curves
    plt.plot(x, M[:, j], 'r-')
plt.plot(x, M[:, 0], 'b-')  # plot the initial condition
plt.title(r'Newell-Whitehead-Segel equation solution ($n(x,t)$)'
          r' for several times ($seed=100$)')
red_patch = mpatches.Patch(color='red', label=r'$n(x,t^*); \ t^* \ fixed$')
blue_patch = mpatches.Patch(color='blue', label=r'$n(x,t=0)$')
plt.legend(handles=[red_patch, blue_patch])
plt.xlabel('Position (units)')
plt.ylabel(r'Solution $n(x,t)$ (units)')
fig2.savefig('newell')
plt.grid(True)
plt.show()

# In[ ]:
Incorrect exception message shown for wrong password error

Hi, I'm using the latest version v2.0.3 and I am getting incorrect exception messages when trying to unzip password-protected zip files created with 7zip using the ZipCrypto encryption method.

Steps to reproduce: Create a zip file from test.txt using 7Zip; provide a password and choose Archive Format: zip, Encryption method: ZipCrypto. Run the sample code:

@Test
public void test() throws Exception {
    ZipFile zipFile = new ZipFile("test-ZipCrypto.zip", "blah".toCharArray());
    for (FileHeader h : zipFile.getFileHeaders()) {
        try {
            zipFile.extractFile(h, "d:\\temp");  // the backslash must be escaped in Java
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}

The sample code above prints:
java.io.IOException: Reached end of entry, but crc verification failed for test.txt

I experienced the same bug, but in a slightly different context. The bug has the following cause: ZipInputStream.java:getNextEntry() converts a ZipException into an IOException. Moving up the stack trace, UnzipUtil.java:createZipInputStream() converts this IOException back into a ZipException. The problem with this exception conversion is that the "type" of the ZipException gets lost. More specifically: the type "WRONG_PASSWORD" gets converted into the type "UNKNOWN". This makes it hard or impossible to correctly identify an invalid password (without applying further workarounds).

I suggest the following changes:
- Do not convert ZipException into IOException. Instead, use ZipException for the whole stack trace.
- Add a unit test that uses an invalid password when attempting to open a ZipInputStream.

Nevertheless, thank you @srikanth-lingala for this excellent library.
I just released the first version of the Android app "Secure Zip Notes" (which is still rather feature-limited): https://github.com/fkirc/secure-zip-notes https://play.google.com/store/apps/details?id=com.ditronic.securezipnotes If our further plans with this app turn out to be successful, then I will make sure that you receive adequate compensation.

In the meantime, you can work around this bug by checking the exception message:

try {
    final ZipInputStream is = zipFile.getInputStream(fileHeader);
    ...
} catch (ZipException e) {
    if (e.getMessage().contains("Wrong Password")) {
        ...
    }
}

@AndrejsAlsevskis The problem with Standard Zip Encryption is that there is no straightforward way to detect a wrong password. At least I wasn't able to find one. The only way I know of is to try to extract the content and compare the CRC after extraction. If the CRC comparison fails, this most likely means a wrong password. This however is not a confirmed way of saying that the password is wrong (it can be that the CRC was wrongly calculated when the zip file was created), but it most likely means a wrong password. AES encryption on the other hand provides a way to check the password upfront, so a wrong password "type" in that case can be relied upon. However, I think at the moment zip4j does not set the exception type to wrong_password in this case. This type should be set. I will include it in the next release.

@fkirc I see what you mean by the wrong password type getting converted to the unknown type. I will look into this and fix it in the next release. And thanks for the pointer about a unit test for it. I am surprised that there is no test case already. I had to release this on a tight schedule and probably lost track of it.

@srikanth-lingala Many thanks for your response. I understand that detection of an incorrect password for ZipCrypto is tricky.
Perhaps the following StackOverflow thread could be helpful: https://stackoverflow.com/questions/1651442/implementation-of-zipcrypto-zip-2-0-encryption-in-java Look for the code with the following comment:

// PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is
// used on versions after 2.0. This can be used to test if the password
// supplied is correct or not.

Cheers, Andrejs

This comment has been taken from the official zip specification. I have read it already as part of the zip specification. Unfortunately, it did not help me much. I experimented a bit but I could not figure out a solution for it. What I also noticed a few years back is that some other famous tools (like WinZip, 7Zip, etc.) also extract the whole file before showing the user a wrong password dialog. This happened with those tools only for ZipCrypto and not AES. In the case of AES, those tools showed a wrong password dialog almost immediately, but with ZipCrypto it took a lot of processing (depending on the file size in the zip) before this dialog showed up, which made me believe this was the only way to figure out a wrong password for ZipCrypto. I could very well be wrong, and there probably is a way to figure it out. If so, I would highly appreciate it if someone could point me in the right direction, except for the specification document, which is, unfortunately, not really helpful.

Fixed in v2.1.0
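For reference, once the fix is in place the exception type can be checked directly instead of matching on the message text. A sketch against the zip4j 2.x API (the file name, password, and output directory are placeholders, and per the discussion above, for ZipCrypto entries the wrong-password determination may still only happen after extraction):

```java
import net.lingala.zip4j.ZipFile;
import net.lingala.zip4j.exception.ZipException;

public class PasswordCheck {
    public static void main(String[] args) {
        ZipFile zipFile = new ZipFile("test-aes.zip", "guess".toCharArray());
        try {
            zipFile.extractAll("out");
        } catch (ZipException e) {
            // Once the type survives the internal conversions,
            // no message parsing is needed
            if (e.getType() == ZipException.Type.WRONG_PASSWORD) {
                System.out.println("wrong password");
            } else {
                System.out.println("other zip error: " + e.getMessage());
            }
        }
    }
}
```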
While I was away over the weekend, I borked my iPhone. I was running iOS 4.0, and I was having trouble with network connections.
- The statusbar wouldn't reflect my actual connection status. WiFi connections would rarely show up in the statusbar, though I knew I was surfing wirelessly (my router at 10.10.10.1 was accessible to the phone, for starters...)
- WiFi would periodically crash. In the middle of a good network connection, it would just. stop. working. I could go to System Preferences and see that WiFi was inexplicably disabled. If I tried to enable it, the switch animation would work, but no networks would be visible. Exiting and re-entering System Preferences would show WiFi disabled again.
- Periodically all networking would crap out. The statusbar would show a good 3G connection, and ifconfig would show an IP... but the phone couldn't establish any connections, anywhere.
Only rebooting ever solved any of the above problems. So this weekend I started to look into it, and it occurred to me that maybe SBSettings was interfering with networking - I periodically disable or enable WiFi in SBSettings, and it occurred to me that maybe there was a conflict there. So I uninstalled SBSettings (noting that it unnecessarily included iphone zsrelay in the uninstall)... and my phone stopped getting signal. I had never seen anything like it before. It didn't show an operator, just 0 signal bars. It could get wifi, but couldn't access the Internet over it. In retrospect, I think the SBSettings uninstall borked my yellowsn0w unlock, and at the same time my networking subsystem in general wasn't working. The solution was clear - I had to reinstall. I figured I would do an update at the same time, just in case the networking issue was related. So I spent this evening painfully updating to 4.1.2. For anyone who encounters the same kind of pain I did, here are the steps I had to follow, for reference:
- Don't bother with the latest Apple firmware.
Firmware 4.2.1 is not easily crackable for the iPhone 3GS yet, and besides, the most common way to crack it is to install the baseband from the iPad (06.15), which brings all sorts of problems, including broken GPS and disappearing battery charge... deal-breakers for most people IMO.
- But when you try to install a non-current version of the firmware, iTunes won't let you! It phones home to the iTunes update servers to make sure that you are restoring to the most recent version. If it hears back that there is a MORE RECENT version than what you are trying to use, it will throw a message: This device is not eligible for the requested build
- If you use the older hack, and change your /etc/hosts so that gs.apple.com points to 127.0.0.1 (your own computer), it will complain that the update server is not responding, and refuse to go forward. This check involves an exchange of an encrypted key, which uses your unique iPhone's ID as one variable and Apple's passphrase as another. Saurik (of Cydia fame) set up a server (note: really awesome post, recommended reading) to keep a backup of examples of successful SHSH key "handshakes" for each device, so you have to use the IP of this server instead.
- Download the 4.1.2 .ipsw for your iPhone. I found it on binsearch by searching for iPhone2,1_4.1_8B117_Restore.ipsw.
- Download PwnageTool - get it from the torrent or the links on the official devteam blog site only! Everyone else is at best a scammer, or at worst a hack stealing the product of devteam's hard work.
- Use PwnageTool to build yourself a custom version of the 4.1.2 firmware you just downloaded.
- To foil Apple's update server check, add a line to /etc/hosts pointing gs.apple.com to 22.214.171.124, Saurik's SHSH backup IP as of this writing. If you have used Cydia before (and clicked the "make my life easier" button), this will be enough to get the restore going.
If not, even a failed check of your validity for the firmware involves a key exchange, and having this IP in /etc/hosts will make sure that the SHSH key gets uploaded to Saurik’s server. Give it about a day, and try again. Did I remember to link to Saurik’s really interesting blog post about this? It explains a lot. If that doesn’t help, try the TinyUmbrella tool. - When your custom .ipsw is written, put your iphone into DFU mode, open iTunes, and OPTION-click (for Macs) on the Restore link to select your custom firmware. - When your phone is restored and working, use Cydia to install ultrasn0w (the unlock).
King Arthur (Ash Gaming) - Game Type: Vegas Slots

We're back in the days of King Arthur, and when he went shopping all he wanted was a Round Table. This is another great slot from Ash Gaming and it gives its players a fantastic chance to win some big cash prizes. This slot has a classy look to it and features all your favourite characters from the legend that's been told so many times. That means Guinevere, Xcalibur and Merlin all make an appearance here. It may not be as funny as Monty Python and the Holy Grail, but with the Camelot Bonus Quest Game you have the chance to win a massive cash prize, and you won't even need Sat Nav to win it. King Arthur stands to the left of the reels and Queen Guinevere on the right, and the reels are inside a golden frame to add to the regal look. This release has five reels and, on its three rows of symbols, there are 20 pay-lines. That's not a fixed amount, so you need to decide just how many to activate. The minimum cost of playing all of them is €0.20 and the maximum you can spend on one spin is €200. Setting that stake is easy to do with all the relevant control buttons underneath the reels. Make sure you set a stake that is an amount you can afford to lose.
Symbols in the game include the Holy Grail (now Monty Python know where it is), the Lady in the Lake, a goblet and the ten-to-Ace playing card symbols. There are two wilds in this slot: King Arthur and Merlin. Both, whether wizards or not, can substitute for other symbols and help get you some winning combinations. They can't sub for the sword-in-the-stone scatter symbol or the Castle of Camelot symbol, which activates the rather interesting Quest game. If it's King Arthur that gets you a winning combination, that win is doubled. There are some good wins to be had in this game, with five of the King Arthur wild symbols on a pay-line paying 10000x your line bet. Just four get you a 2000x multiplier. Other notable wins include 300x for five goblets. To get into the Camelot Bonus Quest Game you need to get at least three of the Camelot bonus symbols on an active pay-line. You then find yourself transferred to a second screen where there's a map with a series of paths leading to Camelot. Your task is to guide King Arthur to Camelot; I'd have thought he knew the way, to be honest. Basically, this is similar to a gamble feature with a trail rather than playing cards. You can gamble all your win or half of it. You decide whether to go left or right; in the end you get to Camelot, and where you arrive determines the multiplier you get. It can be up to 4x, but if you land on 0x then that's the end of the game and you lose your cash, so be careful with this. If you can get at least three of the scatter symbols anywhere on the reels, this triggers the Sword in the Stone Bonus feature. There's another job for Arthur, as he has to try to get the sword free from the stone. If he can do that, then you win an additional multiplier, and that can get as high as 32x if you can get that sword out of the stone five times. There's a decent bit going on in this game, so make sure you access the paytable. That gives you full details of how to play the game.
Also, play some free games; they will be helpful, as they'll show you just how the bonus features, which are a little bit different here, play out. If the mere mention of the Holy Grail gets you thinking of Monty Python, why not try Ash Gaming's 'Life of Brian' slot, which also has a few rather unorthodox bonus games. Microgaming also have a King Arthur slot if you want more Arthurian slot play. An interesting game for sure. It has a great look to it and the bonus games aren't what you'd normally expect to be playing. Definitely worth a go, but be careful with that Camelot Bonus Quest.
There are two options in the Connect beta that are not working properly, as I could see in my installation:
- I have assigned an interval of 5 minutes to execute the cron job, but this cron job is generated to be executed every 15 minutes (the default option). https://imgur.com/eZb8Czu I have checked with WP-CONTROL and I have changed this event to be executed every 5 minutes, but I think you need to check this for the rest of the users who do not know what WP-CONTROL is. BTW, I'm doing the cron jobs inside WordPress, because I have no option on my hosting server to create cron jobs; I mean, I have no cPanel or Plesk to do that.
- https://imgur.com/CzgATKA As you can see in the image, I have selected one of the services I offer to be the default service that must be applied when I create an appointment in Google Calendar. That said, when I create an appointment in Google Calendar it is not created for that service, but for the one I created first, which is more expensive and which I think is the first one in the database. So the option is not working, as GC creates the event in that other service. The only way to avoid this is to connect just the service I want in the advanced tab, but that obviously loses the sync for the other services with Google Calendar, so this is not a solution.
Could you check this and let me know if it's me, or if those options are not really working?

Any news regarding this, Nikola? Again three days waiting for an answer and no news from you! Please let me know if you checked what I told you here, and offer me a solution. As I told you, I can create an admin link to the backend of the site so you can look into it if you prefer.

Hi José, regarding cron: if you are using cron running under WordPress, then that cron only executes when someone visits your site. For example, if no one visits your site for 2 hours, then cron will not be executed during that period, resulting in no sync from the Google side even if you have set it to 5 minutes or 15.
Running that without cron from the hosting makes things unpredictable. In the best scenario you will have visits at 5-minute intervals, but nobody can know that, so you can't be sure of a proper sync. Cron execution in that case is also limited in duration, leaving space for errors due to the sync process being stopped by the hosting; the default time for PHP execution on most hosting is 30 seconds. Regarding the second issue, you need to have two Google calendars if you have two services. With a single calendar, the Google events will sync under the default service if you don't have mappings. My suggestion would be to create a separate calendar for each service and map them inside the advanced sync options.

1) In any case, the point was to let you know that although you can select the interval in the option, this interval is not working, and the cron job is generated to be executed every 15 minutes; that was all! Thank you for the explanation about cron jobs in WordPress anyway.

2) It's a good idea. In any case, for the moment I cannot run that test, because I'm still waiting for you to solve the problem with overlapping events when you have more than one service assigned to the same worker. But I will try it that way when you solve the overlapping of events.

Hi Nikola, I do not completely understand what you mean by "create two Google calendars". Do you mean having two different emails, and one Google calendar for each of them? Because I have created three calendars in my case, one for each service, and it does not work. Well, I can create events in GC that are populated to EA, but only in the calendar that I have been using until now. Am I doing something wrong?
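For reference, when a hosting account does allow real crontab entries but simply has no cPanel/Plesk UI for them (an assumption about this particular host), the usual workaround is to disable WordPress's visit-triggered pseudo-cron and call wp-cron.php from the system scheduler instead; the domain below is a placeholder:

```
# wp-config.php — stop firing WP-Cron on page visits
define('DISABLE_WP_CRON', true);

# crontab — run WP-Cron every 5 minutes regardless of traffic
*/5 * * * * wget -q -O /dev/null "https://example.com/wp-cron.php?doing_wp_cron"
```

This makes the sync interval independent of site traffic, which addresses the unpredictability Nikola describes.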
I implemented [0.008, 0.08] Hz band-pass filtering with both the FSL wrapper (with sigma = 1 / (2 * TR * cutoff_freq), as indicated here) and by adapting for Python 3 the nipype code shown in this example (function bandpass_filter), using the code below. As can be seen in the image, the results are very different (with what looks like a factor of 2 too many somewhere). Would anyone have suggestions why? (I double-checked that running the FSL command line gave the same results as the FSL wrapper below.)

Update 1: I also added the AFNI wrapper output, which is very similar to the nipype custom code, plus the FSL calculations without the factor 2, and with factor 4 instead of 2.

Update 2: I added at the very end a plot of the power spectrum of the time series, and it seems only the nipype custom function does what is intended.

HP_freq = 0.008
LP_freq = 0.08

### FSL
from nipype.interfaces.fsl import TemporalFilter
TF = TemporalFilter(in_file=in_file,
                    out_file=out_file,
                    highpass_sigma=1 / (2 * TR * HP_freq),
                    lowpass_sigma=1 / (2 * TR * LP_freq))
TF.run()

### Nipype custom example
import os
import numpy as np
import nibabel as nb

sampling_rate = 1. / TR
img = nb.load(in_file)
timepoints = img.shape[-1]
F = np.zeros((timepoints))
lowidx = timepoints // 2 + 1  # "/" replaced by "//"
if LP_freq > 0:
    lowidx = int(np.round(LP_freq / sampling_rate * timepoints))  # "np.round(..." wrapped in "int(..."
highidx = 0
if HP_freq > 0:
    highidx = int(np.round(HP_freq / sampling_rate * timepoints))  # same
F[highidx:lowidx] = 1
F = ((F + F[::-1]) > 0).astype(int)
data = img.get_data()
if np.all(F == 1):
    filtered_data = data
else:
    filtered_data = np.real(np.fft.ifftn(np.fft.fftn(data) * F))
img_out = nb.Nifti1Image(filtered_data, img.get_affine(), img.get_header())
img_out.to_filename(out_file)

### AFNI
from nipype.interfaces import afni
bandpass = afni.Bandpass(in_file=in_file,
                         highpass=0.008,
                         lowpass=0.08,
                         despike=False,
                         no_detrend=True,
                         notrans=True,
                         tr=2.0,
                         out_file=out_file)
bandpass.run()

### FSL without factor 2 in formula
### FSL with factor 4 instead of factor 2 in formula
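One way to check which implementation behaves as intended is to run the FFT-mask logic from the custom example above on synthetic data whose frequency content is known exactly. The sketch below re-implements the mask for a 1-D series (the TR, duration, and test frequencies are arbitrary choices for the demonstration, not from the original post) and verifies that a sinusoid inside [0.008, 0.08] Hz passes through untouched while one outside the band is removed:

```python
import numpy as np

def fft_bandpass(data, TR, HP_freq, LP_freq):
    """Ideal (brick-wall) band-pass via an FFT mask; 1-D version of the
    nipype custom example above."""
    timepoints = data.shape[-1]
    F = np.zeros(timepoints)
    lowidx = timepoints // 2 + 1
    if LP_freq > 0:
        lowidx = int(np.round(LP_freq * TR * timepoints))
    highidx = 0
    if HP_freq > 0:
        highidx = int(np.round(HP_freq * TR * timepoints))
    F[highidx:lowidx] = 1
    F = ((F + F[::-1]) > 0).astype(int)  # mirror mask onto negative frequencies
    return np.real(np.fft.ifft(np.fft.fft(data) * F))

TR = 2.0
n = 400                                   # 800 s of data, Nyquist = 0.25 Hz
t = np.arange(n) * TR
inband = np.sin(2 * np.pi * 0.03 * t)     # 0.03 Hz: inside [0.008, 0.08]
outband = np.sin(2 * np.pi * 0.2 * t)     # 0.20 Hz: outside the band
filt = fft_bandpass(inband + outband, TR, 0.008, 0.08)

# The filtered signal should be (numerically) just the in-band component.
print(np.allclose(filt, inband, atol=1e-6))  # → True
```

The same synthetic input can be pushed through each of the FSL/AFNI variants to see which sigma convention actually yields the requested passband, which is essentially what the power-spectrum plot in Update 2 shows.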
#include <iostream>
#include <fstream>
#include <cstdlib>
#include <string>
#include "Vuelo.h"
#include "Reservacion.h"

using namespace std;

void cargarDatosReservacion(Reservacion listaReservacion[], int &reservaciones) {
    ifstream fileReservaciones;
    string id, nombre, numVuelo;
    fileReservaciones.open("Pasajeros.txt");
    reservaciones = 0;
    while (fileReservaciones >> id >> nombre >> numVuelo) {
        listaReservacion[reservaciones].setIdPasajero(id);
        listaReservacion[reservaciones].setNombrePasajero(nombre);
        listaReservacion[reservaciones].setNumeroDeVuelo(numVuelo);
        reservaciones++;
    }
    fileReservaciones.close();
}

void mostrarReservaciones(Reservacion listaReservacion[], Vuelo listaDeVuelos[],
                          int reservaciones, int cantDeVuelos) {
    for (int i = 0; i < cantDeVuelos; i++) {
        cout << "Pasajeros con numero de vuelo " << listaDeVuelos[i].getNumeroDeVuelo() << endl;
        for (int cont = 0; cont < reservaciones; cont++) {
            if (listaReservacion[cont].getNumeroDeVuelo() == listaDeVuelos[i].getNumeroDeVuelo()) {
                cout << listaReservacion[cont].getIdPasajero() << " ";
                cout << listaReservacion[cont].getNombrePasajero() << " ";
                cout << listaReservacion[cont].getNumeroDeVuelo() << endl;
            }
        }
        cout << "-------------------------------------" << endl;
    }
}

void realizarReservacion(Reservacion listaReservacion[], int &reservaciones) {
    string id, nombre, numVuelo;
    string respuestaLoop = "si";
    while (respuestaLoop == "si") {
        cout << "Ingrese el numero de vuelo del viaje que quiera reservar" << endl;
        cin >> numVuelo;
        cout << "Ingrese el nombre del pasajero (Los espacios se ingresan con '_')" << endl;
        cin >> nombre;
        cout << "Ingrese el ID de la persona" << endl;
        cin >> id;
        listaReservacion[reservaciones].setIdPasajero(id);
        listaReservacion[reservaciones].setNombrePasajero(nombre);
        listaReservacion[reservaciones].setNumeroDeVuelo(numVuelo);
        reservaciones++;
        cout << "Desea hacer otra reservacion?" << endl;
        cin >> respuestaLoop;
    }
}

void cargaDatosVuelos(Vuelo listaDeVuelos[], Reservacion listaReservacion[], int &cantVuelos) {
    ifstream fileVuelos;
    string numVuelo, origen, destino;
    int horaSalida, minSalida, horaDuracion, minDuracion;
    fileVuelos.open("VuelosDelDia.txt");
    cantVuelos = 0;
    while (fileVuelos >> numVuelo >> horaSalida >> minSalida >> origen >> destino >> horaDuracion >> minDuracion) {
        Tiempo t1, t2;
        t1.setHora(horaSalida);
        t1.setMinuto(minSalida);
        t2.setHora(horaDuracion);
        t2.setMinuto(minDuracion);
        listaDeVuelos[cantVuelos].setNumeroDeVuelo(numVuelo);
        listaDeVuelos[cantVuelos].setTiempo1(t1);
        listaDeVuelos[cantVuelos].setCiudadDeOrigen(origen);
        listaDeVuelos[cantVuelos].setCiudadDeDestino(destino);
        listaDeVuelos[cantVuelos].setTiempo2(t2);
        cantVuelos++;
    }
    fileVuelos.close();
}

void mostrarVuelos(Vuelo listaDeVuelos[], int &cantVuelos) {
    Tiempo tTemporal;
    cout << "----------------------------------------" << endl;
    for (int cont = 0; cont < cantVuelos; cont++) {
        cout << "| " << listaDeVuelos[cont].getNumeroDeVuelo() << " | ";
        tTemporal = listaDeVuelos[cont].getTiempo1();
        cout << tTemporal.getHora() << ":" << tTemporal.getMinuto() << " | ";
        cout << listaDeVuelos[cont].getCiudadDeOrigen() << " | ";
        cout << listaDeVuelos[cont].getCiudadDeDestino() << " | ";
        tTemporal = listaDeVuelos[cont].getTiempo2();
        cout << tTemporal.getHora() << ":" << tTemporal.getMinuto() << " | " << endl;
    }
    cout << "----------------------------------------" << endl;
}

void agregarVuelo(Vuelo listaDeVuelos[], int &cantVuelos) {
    string numVuelo, origen, destino;
    int horaSalida, minSalida, horaDuracion, minDuracion;
    Tiempo t1, t2;
    cout << "Ingrese el numero de vuelo" << endl;
    cin >> numVuelo;
    cout << "Ingrese el horario de salida (Ingrese la hora y separe con un espacio para ingresar los minutos)" << endl;
    cin >> horaSalida >> minSalida;
    cout << "Ingrese las siglas del lugar de origen (MAYUS)" << endl;
    cin >> origen;
    cout << "Ingrese las siglas del lugar de destino (MAYUS)" << endl;
    cin >> destino;
    cout << "Ingrese tiempo de duracion del vuelo (Ingrese las horas y separe con un espacio para ingresar los minutos)" << endl;
    cin >> horaDuracion >> minDuracion;
    t1.setHora(horaSalida);
    t1.setMinuto(minSalida);
    t2.setHora(horaDuracion);
    t2.setMinuto(minDuracion);
    listaDeVuelos[cantVuelos].setNumeroDeVuelo(numVuelo);
    listaDeVuelos[cantVuelos].setTiempo1(t1);
    listaDeVuelos[cantVuelos].setCiudadDeOrigen(origen);
    listaDeVuelos[cantVuelos].setCiudadDeDestino(destino);
    listaDeVuelos[cantVuelos].setTiempo2(t2);
    cantVuelos++;
}

void mostrarAeropuertos() {
    ifstream filePuertos;
    string nombrePuerto, IATA;
    filePuertos.open("Aeropuertos.txt");
    while (filePuertos >> IATA >> nombrePuerto) {
        cout << IATA << " " << nombrePuerto << endl;
    }
    filePuertos.close();
}

int main() {
    Reservacion listaReservacion[20];
    Vuelo listaDeVuelos[10];
    int cantVuelos, reservaciones;
    char respuesta;
    string menuLoop = "si";
    string reservResp;
    cargaDatosVuelos(listaDeVuelos, listaReservacion, cantVuelos);
    cargarDatosReservacion(listaReservacion, reservaciones);
    cout << "Bienvenido" << endl;
    cout << endl;
    while (menuLoop == "si") {
        cout << "Seleccione una de las siguientes opciones presionando la tecla correspondiente" << endl;
        cout << "a) Mostrar lista de aeropuertos | b) Mostrar vuelos disponibles | c) Realizar una reservacion | d) Mostrar lista de pasajeros con reservacion | e) Agregar vuelo al sistema | x) Salir" << endl;
        cout << "Vuelos disponibles: " << cantVuelos << endl;
        cin >> respuesta;
        switch (respuesta) {
            case 'a':
                mostrarAeropuertos();
                break;
            case 'b':
                mostrarVuelos(listaDeVuelos, cantVuelos);
                cout << "Desea hacer una reservacion?" << endl;
                cin >> reservResp;
                if (reservResp == "si") {
                    realizarReservacion(listaReservacion, reservaciones);
                }
                break;
            case 'c':
                realizarReservacion(listaReservacion, reservaciones);
                break;
            case 'd':
                mostrarReservaciones(listaReservacion, listaDeVuelos, reservaciones, cantVuelos);
                break;
            case 'e':
                agregarVuelo(listaDeVuelos, cantVuelos);
                break;
            case 'x':
                cout << "Vuelva pronto" << endl;
                exit(0);
                break;
        }
        cout << "Desea hacer otra accion?" << endl;
        cin >> menuLoop;
    }
    return 0;
}
How to edit, create and remove a config, how to add scripts, how scripts are loaded (alphabetically), a link to a place where you can learn Lua (or some tutorial), log functions (print, info, warn, error), main variables (now, time, player, storage), how to get an item id, link to the bot files and functions (https://github.com/OTCv8/otclientv8/tree/master/modules/game_bot)

To create a config, navigate to %appdata%\OTClientV8\otclientv8\bot and create a folder with the desired config name. Every folder in the bot directory is treated as a config. The following image shows two configs: the default config and a config called config1 created by the user.

Inside a config are the .lua files containing the scripts you want to run. The client will run the files in alphabetical order. See the next picture for a visual demonstration of which file runs first.

You can also follow the quick in-client tutorial on how to set up your configs if you feel confident enough to tackle it on your own. The picture shown below is the in-client tutorial.

Getting started with scripting

Before diving into the unknown world of otclientv8 scripting, it's highly recommended to learn Lua and the basic principles of programming.

Now that you feel more comfortable with the Lua language and how scripting / programming works, let's dive right into it. First off, let's cover the built-in debugging functions we can use in otclientv8.
info(string)  -- Displays white text in the macro window
warn(string)  -- Displays yellow text in the macro window
error(string) -- Displays red text in the macro window
print(string) -- Prints to the built-in terminal (ctrl+t to access the terminal)

- info: Great way to print information you're collecting, so you know exactly what you're working with
- warn: Great to use when you meet a condition that could possibly cause an issue
- error: Great to use when you meet a condition that WILL cause an error
- print: A neat way to constantly log a lot of values that you want to monitor (the terminal can display far more text than the macro window)

Now that we're familiar with the logging features, let's take a look at some built-in variables we can access.

now     -- Time in milliseconds from when the application started
time    -- Time in milliseconds from when the application started
player  -- Local player object
storage -- A way to store variables even when the client is restarted

- now/time: Very useful for scripts that are dependent on time, for example mwall timers
- player: Contains a lot of information about the player, such as position, health, outfit etc.
- storage: Great to use when you want to keep values even after you close the client

Let's take a look at some examples of how to use these variables and logging functions.

-- Example of using a certain warning depending on a condition
macro(1000, "HP Tracker", function()
  if hppercent() > 90 then
    info("Your hp is in a good condition")
  elseif hppercent() > 50 then
    warn("Your hp is getting low, please heal")
  else
    error("Your hp is ridiculously low, HEAL NOW!")
  end
end)
-- Information about macro can be found here (http://bot.otclient.ovh/books/tutorials/page/macros)

-- Example of logging the player position in the terminal
macro(1000, "Log Player Pos", function()
  print("X: " .. posx() .. " Y: " .. posy() .. " Z: " .. posz())
end)

-- Warn if the player has a skull
macro(250, "Skull Tracker", function()
  if player:getSkull() ~= 0 then
    warn("You have a skull!!!")
  end
end)

As you can see in the examples above, there are quite a few use cases where logging information can be useful. In the scripts above I made sure to use built-in functions, so as not to confuse you with a lot of extra code.

- posx(): Same as player:getPosition().x (returns the x position of the player)
- posy(): Same as player:getPosition().y (returns the y position of the player)
- posz(): Same as player:getPosition().z (returns the z position of the player)
- hppercent(): Same as player:getHealthPercent() (returns the health percentage of the player)

I also made use of a player function, more specifically getSkull(), which is accessed through a creature object. It returns the ID of the skull (0 if there's no skull).

Now that we know how to make use of the logging functions and some built-in functions, let's make a script that has some real value in terms of gameplay. Try to make sense of the following code before reading through the walkthrough guide below.

local healSpellWeak = "Exura"
local healSpellMedium = "Exura Gran"
local healSpellStrong = "Exura Vita"

macro(250, "Auto Heal", function()
  if hppercent() >= 99 then
    return -- healthy enough, nothing to do
  elseif hppercent() > 90 then
    say(healSpellWeak)
  elseif hppercent() > 50 then
    say(healSpellMedium)
  else
    say(healSpellStrong)
  end
end)

If you can't make sense of the code, don't worry. Let's walk through it step by step. First we assign 3 variables, each containing a string with the spell name of our choosing: a weaker healing spell, a medium one, and a strong one, depending on how critical our health is. We then create a macro; this macro will run every 250 milliseconds, and the callback for the macro is our function that contains the logic for when we should heal ourselves.
The function works as follows: if the player's hp is between 90 and 99 percent, we use the weaker healing spell; if the hp drops below 90 percent but is still above 50 percent, we use the medium healing spell; and if all of those conditions fail, we use the strongest spell possible to make sure we don't die. Note that we're using a new function called say(), more info about that below.

- say(string): Accepts a string as a parameter and makes your character say that string in game.

Information about macros can be found here
Extension/Plugin Support

Proposal: Extension/Plugin Support in zBench

Overview
We aim to introduce a streamlined plugin support mechanism into zBench, facilitating easy creation of output format extensions like zbench-csv or zbench-json. This enhancement is designed to lower the barrier to extending zBench functionalities, allowing for a broader range of output formats tailored to different use cases and preferences.

Rationale
The current output capabilities of zBench, while comprehensive, are fixed and might not suit all users' needs. By enabling plugin support, we open the door for customizations.

Proposal
The proposal outlines a simple yet flexible plugin architecture that allows plugins to be easily integrated with minimal changes to zBench's core codebase.

Plugin Interface
A single-function plugin interface that takes benchmark results and outputs them in the desired format. This function can be specified directly in the Config struct, allowing for high flexibility with minimal overhead.

Config Integration
Extend the Config struct to include an optional plugin function. This function, if provided, will be called with the benchmark results, and it is responsible for processing and outputting the data accordingly.

Implementation Example

const std = @import("std");

pub const PluginFn = fn (results: BenchmarkResults) void;

pub const Config = struct {
    ...
    output: ?PluginFn = null,
};

Usage

const zbench = @import("zbench");
const jsonPlugin = @import("zbench-json.zig");

var config = zbench.Config{
    ...
    .output = jsonPlugin.outputResults,
};
zbench.runBenchmark(config);

As I said in https://github.com/hendriknielaender/zBench/pull/61#discussion_r1527063986, for outputting results I think the most important thing is that a user who has specific needs can make the output function they want as easily as possible.
To me that means making sure all the data they could want (ultimately the timing measurements taken) is available, and, to make things easy, a couple of representative functions that can be copied and adapted; a pretty format for humans like there is already, plus a simple JSON and CSV or similar format, wouldn't hurt. It's often most helpful to get out of the way and let a user write the custom formatting code they want themselves, rather than trying to provide a plugin for every need. A plugin-type system usually means extra hoops to jump through, whereas with a plain custom function you only need to pass it a Result value and a Writer - done.

Leveraging function pointers indeed introduces limitations when we consider runtime decisions, such as toggling between stdout and file output or specifying file paths dynamically. Instead of relying on function pointers, we could consider a design where plugins or extensions define their output mechanisms more declaratively. This would allow for greater flexibility. Maybe something like this:

pub const Plugin = struct {
    /// Sets the output target for the plugin. Could be a file path, stdout, etc.
    /// This method allows for runtime flexibility in determining the output destination.
    setOutputTarget: fn (self: *Plugin, target: OutputTarget) void,

    /// Processes the benchmark results. This method is where the plugin does its primary work,
    /// such as formatting results as CSV or JSON.
    processResults: fn (self: *Plugin, results: BenchmarkResults) void,
};

/// A tagged union rather than a plain enum, since the file variant carries a path payload.
pub const OutputTarget = union(enum) {
    stdout,
    file: []const u8,
};

What's the benefit over calling a function with a BenchmarkResults? The output capabilities of that are unlimited, because a BenchmarkResults contains all the relevant information, which can be formatted in whatever way the user wants. Given that's already there, I don't see a need for this configuration parameter; it's only going to be unnecessary complexity.
Another example is sending the results to a network socket, or a buffer; the point isn't to add one more option to handle those cases, it's that the writer interface is intentionally open ended, and it's perfectly well supported by a function with a writer: anytype parameter.

True, this is a valid point regarding the inherent flexibility of using a function that accepts BenchmarkResults and a writer of anytype. The proposal for a plugin interface aims not to restrict this flexibility but to complement it, by offering a structured way to manage multiple output formats and destinations. I think the plugin approach can add value on these points:

- Standardization: By defining a common interface for plugins, we establish a standard method for extending zBench.
- Ease of Use: For users not familiar with Zig's generics, or those looking for quick, out-of-the-box solutions, plugins provide a simple and discoverable way to extend functionality. It's about offering choice and convenience.
- Encapsulation and Composition: Plugins allow for encapsulating functionality related to output formatting and destination management.
- Lifecycle Management: A plugin system can manage the lifecycle of extensions, including initialization, configuration, execution, and teardown. This is particularly useful for resources that require setup or cleanup, like network connections or file handles.

Fair enough, some prototype code to play with might help to clarify how it would work.
It may only be Friday, but I am ready for the weekend. Several things are ready to be recorded and some other things need attention. One of the things the weekends afford me is the opportunity to focus really deeply on things. Finding the time to deeply focus on something is what helps push my learning efforts forward. Every single day I approach the opportunity to learn something with vigor and enthusiasm. That is really one of the keys to being able to enjoy the hunt for new things. We have such an opportunity to pick up and learn new things. It is hard not to take for granted just how open and easy access to vast amounts of information has become with the advent of the internet and people electing to share things. Keep in mind that historically, guilds and other organizations aligned to professions kept information and shared it within an apprentice system. Now you can just boot up YouTube and get a quick tutorial on most anything. I'm going to call it: our access to information is probably at a peak right now, given a few things that will inevitably happen. Even the best knowledge graphs that were scaling up as all the internet content blossomed will struggle with separating current content from out-of-date information. Things change, and even tutorial videos drop out of relevance as change occurs. Structuring your knowledge graph to be able to handle those changes in usefulness is difficult at best to sustain over time. Designing a recursive relevance function against stored graph information requires a baseline and a method to establish that base. Beyond the problem of figuring out what is really the freshest information to share with people, a very real problem arises from the vast overcrowding that is occurring within the knowledge graph. Large language models are creating a scenario where more content than humanity has previously generated can be produced by an endless running prompt.
Separating the real from the endless stream of synthetic content will be nearly impossible. This creates a scenario where the knowledge graph may want to be designed to favor previously created content that existed before the great flooding of information occurred online. However, the aforementioned problem of things going stale is going to occur as favoring previous content only creates freshness problems. That means that right now we may have the best possible access to information. We may have hit peak information and nobody is really appreciating it. At some point, curated knowledge graphs inside platforms and metaverse style realities might be the gold standard for information access. Something is probably going to change here in the next few years and it will be interesting to see how that happens. A lot of people are investing in a VR headset based version of that access and interaction. I don’t really want to login to something wholesale to experience an alternate reality. Yesterday, I walked the dog and read part of Isaac Asimov’s science fiction book about the Second Foundation (1953). That experience was perfectly satisfying. I’ll admit that during working hours I certainly sit in front of a computer screen which is less immersive, but not wholly different from being committed to a VR experience. For the most part within that working experience the world in front of me shrinks down to the content contained within the screen.
Join Mike Benkovich for an in-depth discussion in this video An MVC frame of mind, part of ASP.NET MVC 5 Essential Training (2018). - [Instructor] Web development is different than Windows or Rich Client development in that we've got a request that comes from the client machine, gets sent to the website, and then it gets processed. The basic flow of things is the user enters the URL in their own browser, it gets sent across the wire to the website, which then takes it, processes it, generates a response, and then goes out and sends back a response. This is a process that is pretty well-known, and over time, there's been a number of different types of frameworks and technologies that have come along to simplify that process, taking us from the old days when it was creating a listener to handle that process with a lot of spaghetti code to Active Server Pages, to Web Forms, PHP, where each of these different types of approaches allowed us to create a logical model around, how do we get the request and generate that response and send it back? Some of these patterns include things like the Model-View-Presenter, which is common in a Rich Client, like a Windows application, where the pattern is to have an event-based type of an architecture where I drag a button onto a form, double-click on it, add logic of what it does, and then go to the next page and add another button, and you end up creating code all over the place. The Model-View-Adapter is another model, MVVM, which is common in mobile applications, as well as MVC, which is a pretty common pattern for building web applications, and the reason why is there's a lot of great advantages to it. What is Model-View-Controller, MVC? 
It is really taking the process of focusing and separating out the entire process of taking that request and looking at it from three areas: the Controller, which gets the request, routes it to the right spot, and calls the right logic; the View, which has the presentation of it; and, if we have data, a Model that we can use for making sure that we're working with the right business logic on how we want to handle the processing. So, what are some of the benefits that this gives us? You might be thinking, well, why do I want to go out and learn a whole new way of doing this? There are quite a few benefits, including separation of concerns, which allows us to break the problem down into separate areas, so that if we're collaborating with other people, we can have a team that is able to look at the presentation layer, the data layer, and how we're doing our routing. The separation of concerns leads into how we can also enhance the testability of the application by breaking it into smaller pieces that can be tested, and then we can use that to confirm that our application does what it's supposed to do. We get really good control over the HTML that is being rendered, and it's a nice way to get down to that low level. So, if this is the first time you're coming into looking at MVC, you might be thinking, this is really quite a bit different than what I've done before, especially if you're coming from something like Web Forms, where I've got code in different places. I don't have to worry about creating controllers for logic. If I want to add pages, I just add a folder. With MVC, I have to be thinking ahead of time about how I want to route things around. There are a lot of concepts that you have to learn, things like model binding and validation, the filters that I can put onto actions, and scaffolding. So, how do I go out and create new pages? Well, there are some things in MVC that it does really nicely, that you couldn't do in Web Forms.
There's also dependency injection, which is a concept where we want to make it easier for us to test and break out the logic, so that we have a way to make sure that our code does what we want it to do. Some of the more advanced concepts that we'll talk about in this course include the HTTP programming model, how we get closer to the requests and the responses, and how we look at the HTTP status codes that come back. We think about the REST pattern, with actions dispatching based on verbs. There is also the opportunity to work with content negotiation, so that when a request comes in, we return a response that maps to what the client wants. It might be JSON, it might be data, it might be XML, as opposed to just HTML. Configuration based on code is where we get to configure how our application works. There's bundling and minification, which are efficiencies we can get in how we're rendering data back and forth. MVC is not a completely new way of doing everything. A lot of these concepts also exist inside of Web Forms, and as the framework has evolved, these have come in there so that you can use and leverage these different types of tools in a given app. But it's not meant to be something that is so complex that you don't have time to learn it. It's meant to simplify and make it easier for us to go out and build great software. MVC is a great way to organize your code, to get into the markup and the control flow, and it is fairly easy to learn if you've got the basic background in how .NET works and are familiar with the web patterns. So, with that, let's dive into building our first app. - Creating a new ASP.NET MVC 5 project - Building custom routes - Creating custom layouts - Adding a model to a view - Deploying to Azure and Amazon Web Services - Configuring authentication and authorization - Unit testing code
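The controller/view/model split described in the transcript can be sketched without any framework at all. The TypeScript below is illustrative only (it is not ASP.NET code, and the route and model names are invented): a request path is routed to a controller action, the controller asks the model for data, and the view turns that data into markup.

```typescript
// A request, reduced to just the piece the router needs.
type Request = { path: string };

// Model: owns the business data (hard-coded here for the sketch).
const productModel = {
  all: (): string[] => ["Widget", "Gadget"],
};

// View: turns model data into HTML, nothing else.
const productView = (items: string[]): string =>
  `<ul>${items.map((i) => `<li>${i}</li>`).join("")}</ul>`;

// Controller: receives the request, coordinates model and view.
const productsController = (_req: Request): string =>
  productView(productModel.all());

// Router: maps paths to controller actions.
const routes: Record<string, (req: Request) => string> = {
  "/products": productsController,
};

function handle(req: Request): string {
  const action = routes[req.path];
  return action ? action(req) : "404 Not Found";
}

console.log(handle({ path: "/products" }));
// → <ul><li>Widget</li><li>Gadget</li></ul>
```

Because each piece has one job, the model and view can be unit-tested in isolation, which is exactly the testability benefit the transcript attributes to the separation of concerns.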
Allocating memory for char* attribute in C struct

Suppose that we have the following struct definition in a C file:

typedef struct {
    char *name;
    int id;
    int last_cpu;
} thread_t;

(this is for a simulation I'm writing to play with different scheduling algorithms for my OS class). Every time I create a new thread_t struct, how do I deal with the fact that one of the variables is a char*? For example, I have the following method in my code:

thread_t **get_starting_thread_list() {
    thread_t **threads = (thread_t **) malloc(MAX_NUM_THREADS * sizeof(thread_t *));
    int next_id = 0;
    for (int th_num = 0; th_num < MAX_NUM_THREADS; th_num++) {
        thread_t *new_thread = (thread_t *) malloc(sizeof(thread_t));
        new_thread->name = "New thread";
        new_thread->id = ++next_id;
        new_thread->last_cpu = 0;
        threads[th_num] = new_thread;
    }
    return threads;
}

Could I potentially get a Segmentation Fault if the name field of new_thread is ever "too big"? That is, should I be allocating additional memory for the thread name before assigning it? I've tried running this and it seems to be behaving fine, but I'm afraid that it could potentially break later on.

Also: don't cast the value of malloc in C (http://stackoverflow.com/a/605858/5339899), and you don't even need to allocate memory for threads, just for new_thread.

Why don't I need to allocate memory for threads? Since threads is what this function is returning, couldn't I get a seg fault if I don't allocate memory for it?

Note: A nice simplification of thread_t **threads = (thread_t **) malloc(MAX_NUM_THREADS * sizeof(thread_t *)); is thread_t **threads = malloc(sizeof *threads * MAX_NUM_THREADS);. Easier to maintain and less likely to code wrong.

When calling malloc(), do not cast the returned value: it is a void *, so it can be assigned to any pointer. Always check the returned value (!= NULL) to make sure the operation was successful.

Could I potentially get a Segmentation Fault if the name field of new_thread is ever "too big"?
No, you cannot get a segfault there, no matter how long the string may be, because the memory for a string literal is allocated statically and is never copied.

That is, should I be allocating additional memory for the thread name before assigning it?

Not unless you plan to make the name changeable. If you would like to keep the name pointing to the string literal, your code is perfectly fine. Here is what to do if you want to use non-literal data for the names:

char buf[32];
sprintf(buf, "New thread %d", next_id);
new_thread->name = malloc(strlen(buf) + 1);  /* +1 for the terminating NUL */
strcpy(new_thread->name, buf);

Now you need to call free(threads[i]->name) before de-allocating threads.

I've tried running this and it seems to be behaving fine, but I'm afraid that it could potentially break later on.

Your code is fine. You can always use a memory profiler, such as valgrind, to check for invalid access.

What do you mean that "the memory for the string literal is allocated statically"?

@Ryan The compiler places the characters in a special segment in memory where it keeps string literals. Then this segment is loaded into memory at run time, and your program is given a pointer to it.

In your snippet you are pointing name at a literal string. If you don't intend to change it, that will work fine forever. If you wanted to do something like ptr->name[0] = 'X'; you would be trying to modify a fixed string, which I think is undefined behavior (watch the comments for a language lawyer to confirm or deny ;-).

In your code, the literal string "New thread" is allocated statically, not dynamically. (Meaning that this text exists as soon as your application starts, rather than being dynamically allocated from the heap using functions like malloc().) So this will never be too big or cause a segmentation fault. That string is in static memory, and your thread_t structures will point to that memory. However, in this case the value will be the same for each structure, so I doubt this is what your code really looks like.
The struct defines char *name;, and no matter what string it is pointed at, the size of the pointer will not change. So the answer to this question: Could I potentially get a Segmentation Fault if the name field of new_thread is ever "too big"? is NO; the size of the string makes no difference. However, a string literal cannot be changed: trying to modify the contents of a literal string is undefined behavior and will typically cause a segmentation fault.
LOCATION: Curriculum > Settings > Course Template The course template defines the basic set of information that constitutes a course. It is typically designed so that its fields map to an institution's student information system, plus any fields that are handled outside of the SIS. It uses the same drag-and-drop form builder that is described in Setting up Curriculum Forms. An example course template could include: Pre-Built Course Template Fields Delivered prebuilt fields can't be adjusted to have the "single" or "multiple" value changed. In these cases, we advise that the client create custom fields or arrange for an enhancement. Primary and Graded Component Primary Component and Graded Component are fields built for PeopleSoft customers. These are stored at the course level, rather than within the pre-built Components card. The options shown on the drop-downs for these fields are automatically fetched from the available components on a given course (i.e. the Component Name fields for that course, as defined in the pre-built Components card). If a component doesn't have a name, it will not show up on the drop-down list. The drop-down is updated in real time (i.e. if a user edits the course and adds a Component Name, this new name will immediately show up as an option in the Primary/Graded Component drop-downs). If "Optional Component = Yes", then the component is not displayed as an option in the "Primary Component" and "Graded Component" fields. This prevents Course POST errors and stops end users from accidentally selecting optional components as primary or graded components. This behavior is aligned with how the fields operate in PeopleSoft. In the below example, we have set up two components. One has "Optional Component = Yes" and the other has "Optional Component = No". Only the Lecture component is available for selection as the primary component or the graded component.
There is an input labeled "course aliases." This input is only allowed once in the Course Template and in the Form. Aliases exist because courses can have multiple different Course Codes (as a reminder, what is called the Course Code is usually just an institution's way of saying Subject Code + Course Number, so MATH101 is a course code). Some courses might have multiple course codes. An example is BIOCHEM101: this biochemistry course is offered in the Biology department, but Chemistry majors call it CHEM101 so that it stays consistent with the CHEM courses they must take. In other words, these students take the same class, but for reporting purposes it has different names. For schools that don't utilize our requisite builder, we have implemented the ability to hide the "dependencies" card. This can be set by navigating to the template, clicking the eye icon in the "dependencies" card, then modifying visibility settings. Template Fields (In Depth) For the CIP / Hegis code fields: Options for these fields are loaded in from the integration. The value is stored as the actual code of the data: "CIP Code" or "Hegis Code" (e.g. 05.0206). In the Course/Program page, we will display the "code + description" for users to make it more readable. This only occurs in the Course/Program page; any reports of curriculum data will only show the code. Course Attributes Field The course attributes pre-built field is aware of effective dating. The field refetches attributes when users define effective dating in a proposal (terms/dates). As such, when a user is creating a proposal, the list of available attributes selectable in the course attributes field will change based on their input in the effective date fields.
If you wish to add course attributes to the template BUT do not have effective-dated attributes AND/OR have not integrated these attribute details in your instance (Banner, homegrown SIS, etc.), you must use a custom question with type = select instead of the pre-built attributes question. Using the pre-built question in these scenarios could result in data loss. Notes on Custom Fields Custom fields can be added to the template by dragging and dropping "custom fields" from the question bank. All custom questions added to the course and program template are assumed to be "Coursedog-only" content, and by default are not integrated unless a specific exception is made in the integration wiring. Required Learning Outcomes Users are able to define a required number of learning outcomes in a proposal. This can be accessed by defining “required # of elements” in question settings. The “use error alert” feature has also been added, which presents validation errors as a large flag at the top of the card. In the below example, because the user has only created one learning outcome, the error is shown alerting the user that they need two learning outcomes.
With Bitcoin set to fork near the beginning of August, it’s amazing to see the panic slowly spreading across the throngs of Bitcoin users. There are many perspectives on the matter and all of them become heated under scrutiny. Let’s take a breath, shall we? A good friend and technology wizard known as Zodiac from Bitvest.io wrote a beautiful explanation of technical details and terminology. It says: “WHAT THE FORK? BIPS, FORKS, SEGWIT, UASF, AND NYA?“ BIP: Bitcoin Improvement Proposals are the method for implementing new features in Bitcoin. They may also be informational documents or describe new processes external to the protocol, e.g. modifications to the BIP system itself. Each BIP is given its own number to identify it. For example, BIP141 is the BIP number for SegWit. Forks: A fork is where a blockchain splits into two pieces (forks) at a certain block. Usually only one fork lives on, but it is possible for multiple to exist. There are two major types of forks used to upgrade Bitcoin. Hard forks make the rules more lenient and are backward-incompatible, requiring all users and nodes to upgrade their software. Soft forks are the safer method of forking: they make the rules stricter, leaving old clients and wallets compatible, though lacking the new features. SegWit: Segregated Witness is a BIP which increases network capacity, security, and scalability through a soft fork. Beyond the block size, SegWit brings a lot of new functionality and fixes, such as a transaction-malleability fix that enables the creation of second-layer networks like the Lightning Network. UASF: User Activated Soft Forks are a way for the economic majority, along with a small percentage of miners, to force the majority of miners to accept a protocol upgrade. The goal of a UASF is to soft fork the chain and have all economic activity occur on the upgraded fork, devaluing the coins on the other fork.
In theory, this will incentivize miners to mine the more profitable fork, as the only coins which have value are the ones which users accept and trade. NYA/SegWit2x: The New York Agreement proposes an upgrade to the network activating SegWit through a different soft-fork signaling bit, incompatible with BIP141, followed by a hard fork to 2MB blocks within three months. This would raise network capacity about four-fold in practice: 2x for SegWit itself and 2x for the 2MB blocks (hence the name). While the promise of twice the capacity of SegWit alone may sound good, this is not without technical issues. Being a hard fork makes NYA/SegWit2x a potentially dangerous upgrade path. It is also seen as a potentially undesirable outcome due to the aggressive timeframe in which the 2MB hard fork is set to occur.” Read the whole piece here: Zodiac’s The Forkening The most important thing to consider as the upcoming fork approaches is the storage of your Bitcoin stash. If your coins are managed by someone else, say in an investment scenario, ask the admins a few questions. Check the list below and find a comfortable position for you and your coins. What’s the plan for my Bitcoin supply before, during and after the fork? Which chain will my BTC follow if I keep my coins in your care? The answer should be something like “The economic majority and the most profitable or safest surviving chain will be the choice to follow”. What can I do to protect the value of my coins? There should be an answer. It doesn’t have to be pretty language or a fancy, perfect pitch, but there needs to be a clearly defined plan. Depending on the results you get, you may decide to take an alternate route with your Bitcoin. If you feel unsteady or unsure AT ALL about the hands your bitcoin is resting in, you can either withdraw to a self-controlled wallet until the dust settles, or exchange your Bitcoin into a few different altcoins until the fork has taken place. Don’t panic.
Just calmly determine what is right for you and your Bitcoin.
Paul Louth had a great development team at Meddbase, the healthcare software company he founded in 2005. But as the company grew, so did their bug count. That’s expected, up to a point. More code and more features mean more defects. But the defect rate was growing faster than Louth expected. “We were seeing more and more of the same types of bugs,” Louth says. “It was clear that there was an issue, but it wasn’t clear what it was. My instinct told me the problem was complexity.” Louth and company used C#, which had just happened to add support for Language Integrated Query (LINQ), an alternate language for querying data sources ranging from databases to objects within codebases. LINQ introduced ways to apply the programming paradigm known as “functional programming” to C# code, and learning about how LINQ worked led him to learn more about functional programming in general. Functions—chunks of code that can complete a given task again and again within a program, without the need to re-write the code over and over—are a standard part of programming. For example, you could write a function that returns the circumference of a circle when given its diameter, or a function that returns an astrological sign based on someone’s date of birth. But as Cassidy Williams writes in her guide to functional programming for The ReadME Project, functional programming is about more than just writing functions. It’s a paradigm that emphasizes writing programs using “pure functions”: stateless functions that always return the same value when given the same input and produce no “side effects” when returning their output. In other words, pure functions don’t change any existing data or modify other parts of an application’s logic. If our circumference function changed a global variable, for example, it wouldn’t be pure. In functional programming, data is generally treated as immutable. 
For example, instead of modifying the contents of an array, a program written in a functional style will produce a copy of the array with the new changes made. Unlike object-oriented programming, where data structures and logic are entwined, functional programming emphasizes a separation of data and logic. “When you think about well-structured software, you might think about software that’s easy to write, easy to debug, and where a lot of it is reusable,” writes Williams, head of developer experience at Remote, a company that uses the functional programming language Elixir for their back-end software. “That’s functional programming in a nutshell!” Functional programming can be a boon for growing teams working with large codebases. By writing code with few side effects and immutable data structures, it’s less likely that a programmer working on one part of a codebase will break a feature another programmer is working on. And it can be easier to track down bugs since, as Williams writes, there are fewer places in the code where changes can take place. As Louth delved deeper into the world of functional programming, he realized the approach could help solve some of the problems that large-scale, object-oriented codebases presented, such as those his team was starting to run into at Meddbase. He wasn’t alone. Interest in functional programming has exploded over the past decade as more and more developers and organizations look for ways to tame complexity and build more secure, robust software. “Once functional programming really clicked in my brain, I was like ‘why do it any other way?’” says Williams, who fell in love with functional programming during a college class that used the LISP programming language dialects Scheme and Racket. “Object-oriented programming isn’t bad, it still runs a lot of the world. 
But a lot of developers become evangelical about functional programming once they start working with it.” Sure, object-oriented and imperative programming remain the dominant paradigms for modern software development, and “pure” functional programming languages like Haskell and Elm are relatively rare in production codebases. But as programming languages expand support for functional programming methods and new frameworks embrace alternative approaches to software development, functional programming is quickly finding its way into a growing number of codebases through a variety of different avenues. “Once functional programming really clicked in my brain, I was like ‘why do it any other way?’” Rise of functional programming in mainstream languages Functional programming’s origins stretch back to the early days of programming languages with the creation of LISP in the late 1950s. But while LISP and its dialects like Common LISP and Scheme retained a devoted following for decades, object-oriented programming eclipsed functional programming in the 1970s and 1980s. By the 2010s, interest in functional programming had come roaring back as developers found themselves working with ever-larger codebases, and teams faced the same problems as Louth and his development team. Scala and Clojure brought functional programming to the Java Virtual Machine, while F# brought the paradigm to .NET. Twitter migrated much of their code to Scala, and Facebook deployed venerable functional languages like Haskell and Erlang to solve specific problems. In 2014, Apple introduced Swift, a new, multi-paradigm language that includes robust support for functional programming. The launch proved that functional programming support had become a must-have feature of new programming languages, just as support for object-oriented programming already was. In a 2019 talk titled “Why Isn’t Functional Programming the Norm?”, software engineer Richard Feldman concluded that the functional style of programming is indeed becoming the norm.
Or at least part of the norm. “Something has changed,” he said. “This style is not something people were considering a positive thing in the 90s, and the idea that it is a positive has become sort of mainstream now.” The level and nature of support for functional programming vary by language, but one particular feature has become standard. “The thing that has seemingly ‘won,’ so to speak, is higher-order functions—functions that accept or return another function,” says Tobias Pfeiffer, a back-end engineer at Remote. “This lets you create blocks of functions to elegantly solve problems.” Support for higher-order functions has made it possible for developers to bring functional programming into their codebases without re-architecting existing applications. When Louth and his team decided to shift to a functional programming model, they settled on using F# for new code in existing applications, since it provides robust functional support while maintaining compatibility with the .NET platform. For existing C# code, they employed the language’s own support for functional programming, along with language-ext, Louth’s own open source framework for functional C#. “A full rewrite wasn’t an option,” Louth says. “So when we add new features to existing code, we write it following a functional programming paradigm.” A bigger issue for those mixing paradigms is that you can’t expect the same guarantees you might receive from pure functions if your code includes other programming styles. “If you’re writing code that can have side effects, it’s not functional anymore,” Williams says. “You might be able to rely on parts of that code base. I’ve made various functions that are very modular, so that nothing touches them.” Working with strictly functional programming languages makes it harder to accidentally introduce side effects into your code. 
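As a rough illustration of the two ideas discussed above, pure functions and higher-order functions, here is a small sketch (the function names are invented for the example):

```javascript
// Pure: the same input always yields the same output, and the
// input array is never mutated; a new copy is returned instead.
function withItemAdded(list, item) {
  return [...list, item];
}

// Higher-order: takes a function and returns a new function.
function twice(fn) {
  return (x) => fn(fn(x));
}

const base = [1, 2];
const extended = withItemAdded(base, 3);
console.log(base);      // [1, 2] -- the original is unchanged
console.log(extended);  // [1, 2, 3]

const addTwo = twice((n) => n + 1);
console.log(addTwo(5)); // 7
```

Because withItemAdded never touches its argument, callers elsewhere in a large codebase cannot be broken by it, which is exactly the maintenance benefit the article describes.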
“The key thing about writing functional programming in something like C# is that you have to be careful because you can take shortcuts and then you’ve got the exact sort of mess you would have if you weren’t using functional programming at all,” Louth says. “Haskell enforces the functional style, it encourages you to get it right the first time.” That’s why Louth and his team are using Haskell for some of their entirely new projects. But purity is often in the eye of the beholder. “LISP is the original functional programming language and it has mutable data structures,” says Pfeiffer. Even if your language is purely functional and uses immutable data structures, impurity is introduced the moment you include user input. While user input is considered an “impure” element, most programs wouldn’t be very useful without it, so purity only goes so far. Haskell manages user input and other unavoidable “impurities” by keeping them clearly labeled and discrete, but different languages have different boundaries. Expanding the functional mindset Even though functional programming is now available in most major programming languages, developers aren’t necessarily taking advantage of those features. The functional programming approach requires a substantially different way of thinking about programming than imperative or object-oriented programming. “What blew my mind was that you could do traditional data structures in functional programming,” Williams says. “I didn’t have to make an object for a tree, I could just do it. It feels like it shouldn’t work, but it does.” In front-end development, the recently introduced React Hooks are a set of functions that help developers manage state and side effects, while Redux—a popular library for managing state often used with React and other front-end libraries—takes a largely functional approach. Meanwhile, Gonzalez sees functional programming becoming more mainstream through still another avenue: domain-specific languages.
Being really good at one thing is often an easier route to adoption. For example, the Nix package management system’s expression language. “It’s even more purely functional than Haskell is, in the sense that it is even harder to write with side effects or mutability,” she says. Nix won’t be useful for, say, building a web server. It’s intended for only one purpose: building packages. But because it’s built for that specific task, developers who need a build tool might use it even if they wouldn’t otherwise try a functional programming language. “I think as time goes on you’ll see fewer general-purpose languages and more specialized languages,” she says. “Many of those will be functional.” “React encourages people to think in a more functional way, even if it isn’t purely functional." Building the future of functional programming Like so much else in software development, functional programming’s future largely depends on the open source communities built around functional languages and concepts. For example, one barrier to pure functional programming adoption is simply that there are far more libraries written for object-oriented or imperative programming. From mathematics and scientific computing packages for Python to web frameworks for Node.js, it makes little sense for individuals to ignore all of this existing code that solves common problems just because it’s not written in a functional style. This is something Louth sometimes experiences with Haskell. “It has a fairly mature ecosystem, but it’s pockmarked by abandoned or unprofessional libraries,” he says. To overcome that obstacle, the functional programming community will have to come together to create new libraries that make it easier for developers to choose functional programming over more side effect–prone approaches. From the language-ext library to Redux to Nix, that work is already in progress.
using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; using System.Threading.Tasks; using Microsoft.Extensions.Logging; using PuppeteerSharp; using ScrapeChallenge.Models; namespace ScrapeChallenge.Processor { public class ScrapePureCoUkMenu : IPureCoUkScraper { private readonly ILogger<ScrapePureCoUkMenu> _logger; public ScrapePureCoUkMenu(ILogger<ScrapePureCoUkMenu> logger) { _logger = logger; } [DebuggerDisplay("MenuTitle: {Title} Url: {Url}")] public class MenuAnchor { public string Title { get; set; } public string Url { get; set; } } public class MenuHier { public string Url { get; set; } public string MenuTitle { get; set; } public string MenuDescription { get; set; } public List<MenuSection> MenuSections { get; set; } } [DebuggerDisplay("Title: {Title} Id: {Id}")] public class MenuSection { public string Title { get; set; } public string Id { get; set; } public List<string> DishUrls { get; set; } } public async Task<List<Dish>> ScrapeMenuAt(string url) { //var options = new LaunchOptions { Headless = true }; //await new BrowserFetcher().DownloadAsync(BrowserFetcher.DefaultRevision); // //var connectOptions = new ConnectOptions //{ // BrowserWSEndpoint = "wss://chrome.browserless.io" //}; //using (var browser = await Puppeteer.ConnectAsync(connectOptions)) var dishList = new List<Dish>(); var menuHierList = new List<MenuHier>(); var options = new LaunchOptions { Headless = true }; await new BrowserFetcher().DownloadAsync(BrowserFetcher.DefaultRevision); using (var browser = await Puppeteer.LaunchAsync(options)) { using (Page page = await browser.NewPageAsync()) { await page.GoToAsync(url); var jsSelectAllMenuAnchors = @"Array.from(document.querySelectorAll('.submenu a')).map(t=> { return { 'Title':t.innerText, 'Url':t.href }; });"; var jsSelectMenuDescription = @"document.querySelector('header.menu-header p').innerText;"; var jsSelectMenuSectionTitles = @"Array.from(document.querySelectorAll('h4.menu-title a')) 
.map(item => { id = item.getAttribute('aria-controls'); title = item.querySelector('span').innerText; return { 'Id': id, 'Title': title }; });"; var results = await page.EvaluateExpressionAsync<MenuAnchor[]>(jsSelectAllMenuAnchors); var menuUrls = results.Where(m => !m.Url.Contains("wellbeing-boxes")).ToList(); foreach (var menuAnchor in menuUrls) { await page.GoToAsync(menuAnchor.Url); var menuDescription = await page.EvaluateExpressionAsync<string>(jsSelectMenuDescription); // .Replace("\n", " "); var menuHier = new MenuHier { Url = menuAnchor.Url, MenuTitle = menuAnchor.Title, MenuDescription = menuDescription, MenuSections = new List<MenuSection>() }; menuHierList.Add(menuHier); var menuSections = await page.EvaluateExpressionAsync<MenuSection[]>(jsSelectMenuSectionTitles); foreach (var item in menuSections) { var menuSection = new MenuSection { Title = item.Title, Id = item.Id, DishUrls = new List<string>() }; menuHier.MenuSections.Add(menuSection); var menuSectionDishUrls = await page.EvaluateExpressionAsync<string[]>($"Array.from(document.querySelectorAll('#{item.Id} a')).map(t=>t.href);"); foreach (var dishUrl in menuSectionDishUrls) { menuSection.DishUrls.Add(dishUrl); } } } await page.CloseAsync(); } Debug.WriteLine("STARTING DISH PAGES!"); using (Page page = await browser.NewPageAsync()) { var jsSelectDishName = @"document.querySelector('header h2').innerText;"; var jsSelectDishDescription = @"document.querySelector('article > div > p').innerText;"; // Visit each dish page foreach (var menuHier in menuHierList) { var menuSections = menuHier.MenuSections; foreach (var menuSection in menuSections) { var dishUrls = menuSection.DishUrls; foreach (var dishUrl in dishUrls) { await page.GoToAsync(dishUrl); string dishName = null; string dishDescription = null; bool hasError = false; try { dishName = await page.EvaluateExpressionAsync<string>(jsSelectDishName); dishDescription = await page.EvaluateExpressionAsync<string>(jsSelectDishDescription); } catch 
(Exception ex) { _logger.LogError(ex, ex.Message, null); hasError = true; } var dish = new Dish { MenuTitle = menuHier.MenuTitle, MenuDescription = menuHier.MenuDescription, MenuSectionTitle = menuSection.Title, DishName = dishName, DishDescription = dishDescription, HasError = hasError }; dishList.Add(dish); } } } await page.CloseAsync(); } } _logger.LogInformation($"Completed {dishList.Count} dishes."); return dishList; } } }
This will be a step-by-step guide for creating a simple CI/CD pipeline in Azure DevOps. Before we start, make sure you have an Azure DevOps account and an Azure portal subscription. In this article, I will be setting up the CI/CD pipeline to deploy a sample Angular+Node application to the Azure App Service. You will need to create the following resources to follow through the tutorial: 1. Azure App Service 2. Azure Repo 3. Service Connection in Azure DevOps to deploy to the Azure App Service 4. Sample Angular+Node app to deploy Let’s start with the steps to create the deployment: 1. I’ve created a sample Angular+Node app to deploy, and its structure is as shown below. The ‘backend’ folder has the Node app, and the ‘frontend’ folder contains the Angular app. You will also need to initialize a git repo here. 2. I have also created an ‘azure-pipelines.yml’ file. This file will define the stages to build the app and then deploy it to the correct environment. In this file, I have 4 stages to perform different tasks. - Stage 1: Build In this stage, I’m installing the required packages and libraries, then building the Angular app and moving the build output to the ‘backend/dist’ folder, from where it will be served by my Node app. Then, as we only have to deploy the ‘backend’ folder to the App Service, we zip the backend as an artifact and publish it to the artifact staging directory. - Stage 2: Maintenance Environment Deployment This is a deployment stage, and we will be deploying the zipped artifact we created in the build stage. We trigger this stage only if we are pushing changes to the MAINT branch; this is controlled by the condition in this stage. We also need to enter the ‘service connection name’ and the ‘app name’ in the deployment task to make the App Service deployment. - Stages 3 and 4: These stages are similar to stage 2; the only difference is that they check for changes in different branches and make the deployment to their respective App Service/environment. 3.
Once the setup is done, you have to push your Angular+Node project to the Azure repository that you created on Azure DevOps. 4. Then, from Azure Pipelines, you have to create a new pipeline. Select the repo that you have pushed the code to and select the ‘azure-pipelines.yml’ file for the pipeline. 5. Since we have set the trigger to ‘develop, MAINT, QA, PROD’, the pipeline will get triggered whenever you push changes to those branches. 6. Make sure you have an available agent to execute the pipeline; if you don’t have any agent set up, then you will need to create a self-hosted agent to run your pipeline. 7. You could also set up an approval process in Azure DevOps for the deployment. This can be done by going to the Environments section in Azure Pipelines and defining the approvers for each environment. 8. You can test the pipeline by pushing any change to a trigger branch, and the pipeline should be automatically triggered. The pipeline will start its execution and the deployment will be made to the App Service. The final result will look as shown below. There you go! It’s that easy to build your own CI/CD pipeline. Drop a like and don’t forget to follow if you found the article helpful, and keep building cooler stuff with DevOps!!
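As an appendix, here is a heavily trimmed sketch of what an azure-pipelines.yml with a build stage and one branch-gated deployment stage can look like. This is not the exact file from the article: the stage names, environment name, Node version, and the service connection and app name placeholders are all illustrative and must be adapted to your own setup.

```yaml
trigger:
  branches:
    include: [develop, MAINT, QA, PROD]

stages:
- stage: Build
  jobs:
  - job: BuildApp
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '16.x'
    - script: |
        cd frontend && npm ci && npm run build
        cd ../backend && npm ci
      displayName: Install and build
    - task: ArchiveFiles@2
      inputs:
        rootFolderOrFile: 'backend'
        archiveFile: '$(Build.ArtifactStagingDirectory)/app.zip'
    - publish: '$(Build.ArtifactStagingDirectory)/app.zip'
      artifact: drop

- stage: DeployMaint
  dependsOn: Build
  condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'MAINT'))
  jobs:
  - deployment: DeployMaint
    environment: maintenance    # approvers can be attached to this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              azureSubscription: '<your-service-connection>'
              appName: '<your-maint-app-name>'
              package: '$(Pipeline.Workspace)/drop/app.zip'
```

The QA and PROD stages from the article would repeat the DeployMaint block with their own branch conditions, environments, and app names.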
Patterns for JavaScript security with back-end authorization? I'm looking for some good resources, patterns and practices on how to handle basic security needs, such as authorization, in client side JavaScript. I'm building a website with a back-end system running one of the common MVC frameworks. The back-end will handle all of the real security needs: authorization and authentication. The front is going to be built with Backbone.js, jQuery and a few other libraries to facilitate a very rich user experience. Here's an example of one scenario I need to handle: I have a grid of data with a few buttons on top of it. If you select an item in the grid, certain buttons become enabled so you can take that action on the selected item. This functionality is easy to build... Now I need to consider authorization. The back-end server will only render the buttons that the user is allowed to work with. Additionally, the back-end server will check authorization when the user tries to take that action. ... so the back end is covered and the user won't be able to do what they try, if they are not authorized. But what about the JavaScript? If my code is set up with a bunch of jQuery click handlers or other events that enable and disable buttons, how do I handle the buttons not being there? Do I just write a bunch of ugly if statements checking for the existence of the button? Or do I write the JavaScript in a way that lets me only send the JavaScript for the buttons that exist, down to the browser, based on authorization? or ??? Now imagine a tree view that may or may not allow drag & drop functionality, based on authorization... and an add/edit form that may or may not exist based on authorization... and all those other complicated authorization needs, with a lot of JavaScript to run those pieces of the front end. 
I'm looking for resources, patterns and practices to handle these kinds of scenarios, where the back end handles the real authorization but the front end also needs to account for things not being there based on the authorization. Voting to migrate - this forum deals more with concrete code problems / solutions, rather than extended discussion comparing approaches. FWIW, JavaScript is inherently insecure because it operates on the client machine. It can be used to complement back-end security, but should never be relied upon as a replacement for it. Generally you use modular feature detection. The idea is that each page has a list of possible modules it needs, and each module checks the DOM for whether it's actually needed. Thus all your buttons become modules. This would be improved massively by using a templating engine that does this modular loading for you. Which of the StackExchange sites would be a better fit for this? @DerickBailey best fit would be StackOverflow There are three fairly straightforward things I could see: Modular Backbone Views I grew very fond of nested and highly modular Backbone views. That is, every row in your tree could be a Backbone view, each reacting to its own authorization requirements. Multiple Event Hashes Set up multiple event hashes on the view that you switch out with delegateEvents() based on your authorization requirements and triggered by an event. That way you get around a set of ugly if statements. Multiple Templates In a similar vein, you specify multiple templates to render based on the authorization requirements. All three would require an event structure that's set up (for example using your own vent PubSub handler) where you trigger authorization checks based on the response of a RESTful server request, or based on some client-side function. I've also found that for some simple differences you can do it with CSS. Add some class or data-logged-in="true" to a parent element and then show/hide what you need based on that.
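The "modular feature detection" approach mentioned above can be sketched roughly like this (the module names and selectors are invented for illustration; on a real page, root would be document):

```javascript
// Each UI module checks the DOM for the elements it owns and wires
// handlers only when they exist; it reports whether it initialized.
// If the server withheld a button for an unauthorized user, the
// module simply no-ops -- no scattered if statements needed.
const modules = {
  deleteButton(root) {
    const btn = root.querySelector('#delete-item');
    if (!btn) return false;        // not authorized: server omitted it
    btn.addEventListener('click', () => console.log('delete clicked'));
    return true;
  },
  exportButton(root) {
    const btn = root.querySelector('#export-data');
    if (!btn) return false;
    btn.addEventListener('click', () => console.log('export clicked'));
    return true;
  }
};

// Initialize every module against the page and return the names of
// the ones that actually found their markup.
function initModules(root) {
  return Object.keys(modules).filter((name) => modules[name](root));
}
```

The existence check lives in exactly one place per feature, so the rest of the client code never needs to know which buttons the server chose to render.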
The way I handle authentication on the client is to have a singleton Backbone model that holds an isAuthenticated value that I initially populate from the server with:

@{ App.user.set({ isAuthenticated: @UserSession.IsAuthenticated.ToString().ToLower() }); }

Then all JavaScript controls/functionality that change behavior based on authentication basically just listen to changes on this field and re-render themselves into the correct state. All this view logic is done with JavaScript, so it works at page generation time (by the server) or on the client with JavaScript/Ajax. I don't maintain event handlers to existing/hidden functionality; I re-create all the UI elements and re-hook up all the event handlers after the views are re-rendered based on the isAuthenticated flag. The nice thing is that once you log in with Ajax on the client (i.e. after the server page has been rendered) you just need to set the same field and, as if by magic (Backbone FTW :), it all works and renders correctly.

I would recommend using the Factory pattern together with Backbone's class properties to initialize different views depending on the authorization of the user. In the example below, I define a base GridView which contains all of the default and common behaviors. AdminGridView and EditorGridView contain the authorized user's specific functionality. In the simplified example below, a click handler will only be wired up for admins. The nice thing is that everything is encapsulated so that only the factory needs to know about AdminGridView and EditorGridView. Your code would just interact with GridView.

// GridView is an abstract class and should not be invoked directly
var GridView = Backbone.View.extend({
    // put all common / default code here
    _template: _.template($('#grid').html()),
    initialize: function(){
        this.model.bind('change', this.render, this);
    },
    onButtonClick: function(){
        // do something
    },
    render: function(){
        $(this.el).html(this._template(this.model));
    }
}, {
    create: function (options) {
        switch (options.authorization.get('type')) {
            case 'admin':
                return new AdminGridView(options);
            case 'editor':
                return new EditorGridView(options);
            default:
                throw new Error('Authorization type ' + options.authorization.get('type') + ' not supported.');
        }
    }
});

var AdminGridView = GridView.extend({
    // put admin specific code here
    events: {
        'click .button': 'onButtonClick'
    }
});

var EditorGridView = GridView.extend({
    // put editor specific code here
});

var authorization = new Backbone.Model({ type: 'admin' });
var gridView = GridView.create({ model: someModel, authorization: authorization });
$('body').append(gridView.render().el);

Anything on the client side can be hacked. So what we're doing in our single-page JS app is entitlements on the server. We combine a list of entitlements per user and have an OAuth token that we pass to the client and is sent back with each request. Then on the server side, before an action is taken, we check whether the user is authorized for that action. Of course, this doesn't protect you from someone at a cafe with Firesheep... but that's another problem. Oh, instead of if statements everywhere, an alternative is to produce an object literal on the server side that's a table that can be used in conjunction with the strategy pattern. This way you're not writing if everywhere. The way we handle this is letting the server send us the HTML that needs to be rendered, based on the decision or decisions that result in a consequence. Something like this:

this.ActionFor(Model)
    .Decision<Authentication1>()
    .Decision<Authentication2>()
    .Consequence<HTMLRenderer>()

You could probably use this pattern similarly in JS if you think you need it. right - i noted in the description that the server only renders what the user can do. how do you handle this in the javascript?
We fire an event when a row is selected that sends the id of the entity, and the server can then decide what button or buttons to render, or nothing, like this:

onRowSelected: function (e, ui) {
    $('#entity-placeholder').load('/entity/manage/' + ui.data.Id, function () {
        //callback
    });
}

Since you mentioned Backbone, your code should not be a bunch of jQuery click handlers; instead you can use models to store data about authorization and have your views react accordingly. +1... i was just trying to describe the problem in a way that would be easier for non-backbone people to weigh in. On the client side, let jQuery take care of it. If the element is not there, it does not do anything. jQuery only attaches event handlers to those elements that it finds, so you will not need any if-an-element-exists check; just use the jQuery selectors.
C/C++ Software Engineer
Alert Logic
THIS JOB HAS EXPIRED

Alert Logic is hiring for a C/C++ Software Engineer to join our team of highly skilled and motivated engineers that are developing our next generation SaaS platform based on proprietary cloud computing using C in a Linux environment. The platform will run on a multi-node computational grid with virtually infinite horizontal scaling capabilities. A large portion of the cloud infrastructure is developed in Erlang and therefore allows a high level of parallelism. Alert Logic sits at the nexus of two of the hottest trends in IT: the adoption of cloud technologies and increased security and compliance requirements driven by an increasingly connected world. In a typical month, Alert Logic processes over 100 million security events and stores petabytes of data for over 1,300 enterprise customers. We are an established company with a history of almost ten years, yet maintain a pace, energy and agility that allows us to advance our offerings and technology and preserve a startup-like culture. Our revenues are strong. Our customer base is growing rapidly. Everything is in place except one thing: you.

- Design, develop and maintain client-server architecture using C in a Linux environment
- Work independently with minimal supervision to solve complex engineering problems
- Work in a dynamic team environment to produce the design and design documentation, conduct design reviews and implement the design in a timely fashion, following coding guidelines and Agile development methodology
- Expert-level skills in C programming and the POSIX API (including usage of multi-threading constructs)
- Strong knowledge of UNIX/Linux system-level API design with a strong background in writing network applications
- Experience in client/server architecture and distributed systems development
- Proven experience delivering production-level projects that are used by a large number of concurrent users
- Strong practical knowledge of network protocols such as IP, TCP, UDP and HTTP
- Well-rounded skills with proven hands-on experience with more than one object-oriented programming language such as C++, Java, Perl, etc.
- Hands-on experience with functional programming languages such as Erlang is a big plus
- Understanding of network security including SSL (OpenSSL)
- Experience developing autonomous agents and multi-agent systems is a plus
- Strong relational database skills (any experience with MySQL is a plus)
- Knowledge of Windows systems internals is a plus
- BS in CS, CE, EE or an equivalent field is desired but not required

1776 Yorktown | Houston, TX 77056

Company Profile: Alert Logic is the industry's leading provider of on-demand IT compliance and security solutions. Alert Logic's on-demand solutions provide mid-sized organizations with the easiest way to secure networks and comply with policies and regulations by enabling our customers to detect threats, eliminate vulnerabilities, and manage log data. Our on-demand platform utilizes software-as-a-service to deliver the benefits of rapid deployment, zero maintenance, and no upfront capital costs. As a result, Alert Logic customers benefit from easy and affordable network security and compliance. Headquartered in Houston, Texas, Alert Logic is changing the way IT compliance and security solutions are designed, delivered, and utilized. Investors: Access Venture Partners, DFJ Mercury, Hunt Ventures/Hunt BioVentures, OCA Ventures, Updata Partners
As shown in my previous post, I have set up a GitLab account so I can host my Git repositories there. In this post I will show how you can combine XCode (v7.1) with GitFlow and GitLab. In fact I will end up using both XCode (for programming) and the Terminal (for my GitFlow), but that is the same as when I am developing in Java. In that case I also prefer the prompt most of the time for my (basic) Git stuff. It takes the following steps to set up your project:
- Create the project in XCode
- Create the project in GitLab
- Link the XCode project to the GitLab project
- Init Git Flow
Each step will be shown in more detail next.

Create the project in XCode
This is the easiest part. Just create a new project in XCode and make sure that Git is enabled in the project:

Create the project in GitLab
The next step is to create a corresponding project in GitLab. This is also quite straightforward: When the project is created, copy the url to the repository: Now we have two separate projects, so let's connect them in the next step.

Link XCode project to GitLab project
In this step we connect the two projects together by setting the remote URL for the local Git project. This means that by pushing the local repo to the remote one, it ends up in the GitLab repository we just created. To ‘connect’ these two repositories, open the ‘Source Control’ menu item in XCode for our new project and navigate to the settings: Then select ‘Remotes’ and choose the option to add one: In the popup, fill in the name of the remote repo (by default called ‘origin’) and the url you copied in GitLab for the new project: Now you can commit and push your changes to the remote repository from XCode by selecting first ‘Commit’ and then ‘Push’ in the ‘Source Control’ menu: So now we have XCode and GitLab connected. It’s time to put some Flow into it.

Init Git Flow
Open the Terminal on the Mac and browse to the ‘MyGitProject’ directory.
Execute the command git flow init to initiate GitFlow for this project. Now push the changes, either in the Terminal or in XCode, to the remote server. As you can see, we are now working on our ‘Develop’ branch. That’s it. If you look into GitLab again you will see our project now has two branches. The way to go now is to create a feature branch in the Terminal with git flow feature start SetupStoryBoard. Now make your changes in XCode, commit them, and when you are done, finish the feature with git flow feature finish SetupStoryBoard and push the changes in the develop branch to the remote server. Lots more Git fun with XCode can be found here, here and here. More info on why GitFlow with iOS here.
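For readers without git-flow installed, the branch layout the commands above produce can be sketched with plain git. This is an illustrative sketch only (the `/tmp/gitflow-demo` path and demo identity are made up for the example); git-flow adds merge/cleanup behavior on `finish` that is not shown here.

```shell
set -e
repo=/tmp/gitflow-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q .
# an initial commit so branches can be created (demo identity, local only)
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git checkout -q -b develop                   # 'git flow init' creates a develop branch
git checkout -q -b feature/SetupStoryBoard   # 'git flow feature start SetupStoryBoard'
git branch --list
```

After a feature is done, git-flow's `feature finish` essentially merges `feature/SetupStoryBoard` back into `develop` and deletes the feature branch.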
Research if hamstr can be integrated with Hashed Network/Portal

Hamstr (https://github.com/styppo/hamstr) is a twitter-style Nostr client developed in Quasar. Nostr uses the same signatures (Schnorr) as Hashed Network. Instead of signing a transaction, users sign an event. It would be super cool to integrate Hamstr into the Portal using the same authentication. We can run our own Nostr relay (we already do at wss://r1.hashed.systems). I think the first step would be to see if we can use the same polkadot.js signature library to sign and submit events to a nostr relay. If that works, then it should just be a matter of connecting the dots and implementing a Polkadot signing experience within a fork of Hamstr.

How do you envision it being used in the Portal @3yekn? I am curious to see how similar the signing is, and whether signature providers are interchangeable between Substrate and nostr. This will inform our signer road map.

We should have a way to message token holders that is gated. Even if it isn't confidential information AND the HASH token isn't a security, I think there are compliance rules about what can and can't be said publicly. If we can communicate on nostr and gate the relay so only people proven to have HASH tokens can join, that is a helpful guard. Token holders will likely be coming to the portal to maybe claim tokens and maybe engage with our products. Nostr is super engaging and the fastest-growing use case in the space. Having it integrated likely increases our DAU, which is the most critical metric. Nostr would basically supplant all the functionality found on something like Subsocial. Hamstr is written in Quasar so it seems like it would be the easiest to integrate. Those are some of the trends and themes. In terms of precisely how it would be used in the short term, I think it is to communicate with token holders as a "Hashed Network" feed of information, and it would inform our product road map.
Like we discussed yesterday, I can imagine nostr replacing the Greymass buoy server and some of the other tooling to greatly simplify the new Signer product. But I am not sure, there are lots of research tangents to this so it is highly exploratory. Thanks for the detailed explanation @3yekn! So the first task would be to determine if its possible to sign a nostr event with the polkadotjs wallet, with a substrate account? @sebastianmontero Yes, that is the first experiment, which maybe can be done just with the polkadot CLI or polkadotjs application utilities. Technically it is the same signature type, but I don't know how the payload might be manipulated (e.g. hashed) before it is signed. This is the spec on how to sign an event. https://github.com/nostr-protocol/nips/blob/master/01.md Here is the list of libraries, where nostr-tools is a popular JS client. https://github.com/aljazceru/awesome-nostr#libraries I have been building with this rust one and it is pretty good: https://github.com/rust-nostr/nostr The inverse is also a good experiment: signing a Substrate transaction with a library that can be used to also sign a nostr event. Or maybe not the exact same library, but the same secret mnemonic. With coinstr, I used the same mnemonic and Rust code to sign a nostr event and a multisig bitcoin transaction. This illustrates how a user can do both with the same app and same key. IMHO, an amazing goal is having one app and one secret mnemonic that can sign Substrate, Nostr, and Bitcoin transactions. Then we could use Nostr Connect and the relay to communicate between web apps and that app to present any of these requests to the user for a common UX.
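As background for the signing experiment, the NIP-01 spec linked above defines the event id as the sha256 of a fixed JSON serialization of the event fields; that is the payload a Schnorr key (whether from nostr-tools or a Substrate wallet) would ultimately sign. A minimal sketch of that id computation (values below are placeholders, not a real key):

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: serialize [0, pubkey, created_at, kind, tags, content]
    # as UTF-8 JSON with no extra whitespace, then sha256 it.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# placeholder 32-byte hex pubkey, a kind-1 (text note) event
eid = nostr_event_id("ab" * 32, 1700000000, 1, [], "hello nostr")
print(len(eid))  # 64 hex characters
```

The open question in this issue is whether polkadot.js can produce a Schnorr signature over this 32-byte id in the form relays accept; this sketch only shows what gets signed.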
# frozen_string_literal: true module Filterable extend ActiveSupport::Concern module ClassMethods # Filter and sort an ActiveRecord table with the provided params # Include on an ApplicationRecord object via: # 'include Filterable' # # Filter and sort on parameters like: # 'ObjectName.filter_and_sort(params.slice(:example1), params.slice(:order, :order_dir))' # # Override #default_sort to modify the default sort order if no/invalid order params are supplied # Override #base_query to change the root query for the table, e.g. to include relations # # @param filtering_params The parameters to filter the table on # @param ordering_params The parameters to order the table on def filter_and_sort(filtering_params, ordering_params) results = base_query filtering_params.each do |key, value| if value.present? results = results.public_send("filter_by_#{key}", value) end end # filter field and direction to known values order_field = ordering_params[:order] order_direction = ordering_params[:order_dir] unless order_field.present? && order_direction.present? return default_sort(results) end return default_sort(results) unless (column_names + dynamic_fields).include?(order_field) unless %w[asc desc].include?(order_direction.downcase) return default_sort(results) end # check for overridden order scope if respond_to?("order_by_#{order_field}") return results.public_send("order_by_#{order_field}", order_direction) end results.order("#{ordering_params[:order]} #{ordering_params[:order_dir]}") end # define a default sort order on a query def default_sort(results) results end # define a base query for the type def base_query where(nil) end # define an empty array for dynamic fields def dynamic_fields [] end end end
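The dispatch at the heart of `filter_and_sort` can be seen in isolation without Rails. This hypothetical sketch (the `PeopleList` class and `filter_by_name` method are made up for illustration) sends the same `"filter_by_#{key}"` messages to a plain in-memory list the way the concern sends them to ActiveRecord scopes:

```ruby
# Hypothetical stand-in for an ActiveRecord relation: a plain filterable list.
class PeopleList
  attr_reader :rows

  def initialize(rows)
    @rows = rows
  end

  # Analogue of a filter_by_name scope on the model
  def filter_by_name(value)
    PeopleList.new(rows.select { |r| r[:name] == value })
  end

  # Mirrors the dispatch loop in Filterable.filter_and_sort above:
  # each present param is turned into a filter_by_* message send.
  def filter_and_sort(filtering_params)
    filtering_params.reduce(self) do |result, (key, value)|
      blank = value.nil? || value == "" # stand-in for Rails' value.present?
      blank ? result : result.public_send("filter_by_#{key}", value)
    end
  end
end

list = PeopleList.new([{ name: "ada" }, { name: "bob" }])
puts list.filter_and_sort(name: "ada").rows.length # 1
```

The design point this isolates: because filters are resolved by `public_send`, adding a new filter is just defining a new `filter_by_*` method (a scope, in the real concern), with no changes to the dispatch code.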
The SIAM Conference on Computational Science and Engineering 2021 (CSE21) is coming up in March (03/01 - 03/05). Like all other meetings these days, CSE21 will be a virtual event. We will present a poster titled “Edge: Development and Verification of a Large-Scale Wave Propagation Software”. The topic is lessons learned when conducting large-scale seismic forward simulations. Specifically, we’ll discuss our workflow resulting from a series of high-frequency ground motion simulations of the 2014 M5.1 La Habra, California earthquake using the Extreme-scale Discontinuous Galerkin Environment (EDGE).

Intel’s Alex Heinecke will present recent research on Deep Learning and Tensor Contractions as part of the virtual course “Parallel Programming I” at Friedrich Schiller University Jena. The presentation is on Dec. 18, 2020 and starts at 08:00AM CET. Interested students outside of this specific course are welcome to join and may express their interest to participate by writing an e-mail to firstname.lastname@example.org.

Abstract: Deep learning (DL) is one of the most prominent branches of machine learning. Due to the immense computational cost of DL workloads, industry and academia have developed DL libraries with highly specialized kernels for each workload/architecture, leading to numerous, complex code bases that strive for performance yet are hard to maintain and do not generalize. In this work, we introduce the batch-reduce GEMM kernel and its elementwise counterparts, together called the Tensor Processing Primitives (TPP), and show how the most popular DL algorithms can be formulated with TPPs as their basic building blocks. Consequently, DL library development degenerates to mere (potentially automatic) tuning of the loops around this sole optimized kernel. By exploiting TPPs we implement Recurrent Neural Network, Convolutional Neural Network and Multilayer Perceptron training and inference primitives in a high-level-code-only fashion.
Our primitives outperform vendor-optimized libraries on multi-node CPU clusters. Finally, we demonstrate that the batch-reduce GEMM kernel within a tensor compiler yields high-performance CNN primitives, further amplifying the viability of our approach.

About the Speaker: Alexander Heinecke is a Principal Engineer at Intel’s Parallel Computing Lab in Santa Clara, CA, USA. His core research is in hardware-software co-design for scientific computing and deep learning. Applications under investigation are complexly structured, usually adaptive numerical methods which are quite difficult to parallelize. Special focus is given to deep learning primitives such as CNNs, RNNs/LSTMs and MLPs, as well as to their usage in applications ranging from various ResNets to BERT. Before joining Intel Labs, Alexander studied Computer Science and Finance and Information Management at Technical University of Munich (TUM), Germany, and in 2013 he finished his Ph.D. studies at TUM. Alexander was awarded the Intel Doctoral Student Honor Program Award in 2012. In 2013 and 2014 he and his co-authors received the PRACE ISC Award for achieving peta-scale performance in the fields of molecular dynamics and seismic hazard modelling on more than 140,000 cores. In 2014, he and his co-authors were additionally selected as Gordon Bell finalists for running multi-physics earthquake simulations at multi-petaflop performance on more than 1.5 million cores. He also received 2 Intel Labs Gordy Awards and 1 Intel Achievement Award. Alexander has more than 50 peer-reviewed publications, 6 granted patents, and more than 50 pending patent applications.

Our extended abstract for SEG 2020 is available from the SEG Library. The respective pre-recorded presentation will stream on October 12, 2020 at 3:55PM. During the presentation we will answer questions in the chat.
"Message is incomplete" error Hello! I just upgraded to 1.0.0 from 1.0.0-rc1-final on both the front-end and back-end. With no other code changes, we started getting an error being thrown in signalr.js: "Error: Message is incomplete. at Function.TextMessageFormat.parse (https://localhost:44361/lib/aspnet-signalr/signalr.js:1404:19) at JsonHubProtocol.parseMessages (https://localhost:44361/lib/aspnet-signalr/signalr.js:3082:62) at HubConnection.processIncomingData (https://localhost:44361/lib/aspnet-signalr/signalr.js:1940:42) at WebSocketTransport.HubConnection.connection.onreceive (https://localhost:44361/lib/aspnet-signalr/signalr.js:1760:68) at WebSocket.webSocket.onmessage (https://localhost:44361/lib/aspnet-signalr/signalr.js:2619:47)" I added a console.log to that TextMessageFormat.parse method to see the input, and I see that it is a C# object that is being broken into half. That is, the first message is of length 2,049 characters. Then that exception is thrown. Then I immediately get the next message that is about the same length (2,046 characters), followed by the same exception. Finally, I get the 3rd, and final message that completes the C# object...and that throws an exception of: "Unexpected token f in JSON at position 3" Again, this is the same code that was working for 1.0.0-rc1-final. Wondering if anybody has any insights about what might be going on? Is there a trick for how I should be passing the data to the client that has changed since rc1? My method of passing the data to the front-end, from the hub, follows: await Clients.Client(Context.ConnectionId).SendAsync("FullRsvpListOfAttendees", attendees.Select(n => new { AttendeeId = n.AttendeeId, FirstName = n.AttendeeFirstName, LastName = n.AttendeeLastName, IsMuted = n.IsMuted }).ToList()); Thank you! Leaving tactical dot here . Server and client logs (also described in the Diagnostics Guide) would also help. Attached! 
It's hard to tell for sure, but it looks, to me, like the "TextMessageFormat.RecordSeparator" is missing from the end of the message that is being sent from the server to the client. Hopefully I properly generated these logs. Please let me know if there's anything more I can provide, or if you would like to take 5/10 minutes for a quick Zoom/screenshare to show you live. Server Log.txt Client Log.txt localhost.har.txt Ahhh, it appears to be a problem with the Azure SignalR Service - I turned that off, and everything works great. Then I turn it back on and we get the errors back again. Is this the right place to leave this error report? I don't see a GitHub repo for the AzureSignalR to leave issue reports. You can file issues at https://github.com/Azure/azure-signalr/issues Not sure how I missed that. Thx, I'll close this and report there.
sh make.sh error: ImportError: libffi-45372312.so.6.0.4: cannot open shared object file: No such file or directory

When I do sh make.sh, I meet this problem:

running build_ext
skipping 'model/utils/bbox.c' Cython extension (up-to-date)
skipping 'pycocotools/_mask.c' Cython extension (up-to-date)
Compiling nms kernels by nvcc...
Including CUDA code.
/home/CAD409/teng/object_detect/faster_rcnn/lib/model/nms
['/home/CAD409/teng/object_detect/faster_rcnn/lib/model/nms/src/nms_cuda_kernel.cu.o']
Traceback (most recent call last):
  File "build.py", line 33, in <module>
    extra_objects=extra_objects
  File "/usr/lib64/python2.7/site-packages/torch/utils/ffi/__init__.py", line 176, in create_extension
    ffi = cffi.FFI()
  File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 46, in __init__
    import _cffi_backend as backend
ImportError: libffi-45372312.so.6.0.4: cannot open shared object file: No such file or directory
Compiling roi pooling kernels by nvcc...
/home/CAD409/teng/object_detect/faster_rcnn/lib/model/roi_pooling
Including CUDA code.
Traceback (most recent call last):
  File "build.py", line 32, in <module>
    extra_objects=extra_objects
  File "/usr/lib64/python2.7/site-packages/torch/utils/ffi/__init__.py", line 176, in create_extension
    ffi = cffi.FFI()
  File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 46, in __init__
    import _cffi_backend as backend
ImportError: libffi-45372312.so.6.0.4: cannot open shared object file: No such file or directory
Compiling roi align kernels by nvcc...
/home/CAD409/teng/object_detect/faster_rcnn/lib/model/roi_align
Including CUDA code.
Traceback (most recent call last):
  File "build.py", line 34, in <module>
    extra_objects=extra_objects
  File "/usr/lib64/python2.7/site-packages/torch/utils/ffi/__init__.py", line 176, in create_extension
    ffi = cffi.FFI()
  File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 46, in __init__
    import _cffi_backend as backend
ImportError: libffi-45372312.so.6.0.4: cannot open shared object file: No such file or directory
Compiling roi crop kernels by nvcc...
Including CUDA code.
/home/CAD409/teng/object_detect/faster_rcnn/lib/model/roi_crop
Traceback (most recent call last):
  File "build.py", line 32, in <module>
    extra_objects=extra_objects
  File "/usr/lib64/python2.7/site-packages/torch/utils/ffi/__init__.py", line 176, in create_extension
    ffi = cffi.FFI()
  File "/usr/lib64/python2.7/site-packages/cffi/api.py", line 46, in __init__
    import _cffi_backend as backend
ImportError: libffi-45372312.so.6.0.4: cannot open shared object file: No such file or directory

Does somebody know how to solve this problem? Thanks!

got the same error

@everythoughthelps @114153 did you solve this?
import vtk.term as term


class Style:
    def __init__(self):
        self.code = 0

    def _render(self):
        return "{}[{}m".format(term.ESCAPE_CHARACTER, self.code)


# Note: a bare super() call does nothing in Python 3; the parent
# initializer must be invoked explicitly with super().__init__().
class Style_Reset(Style):
    def __init__(self):
        super().__init__()
        self.code = 0


class Style_Bold(Style):
    def __init__(self):
        super().__init__()
        self.code = 1


class Style_Faint(Style):
    def __init__(self):
        super().__init__()
        self.code = 2


class Style_Italic(Style):
    def __init__(self):
        super().__init__()
        self.code = 3


class Style_Underline(Style):
    def __init__(self):
        super().__init__()
        self.code = 4


class Style_Invert(Style):
    def __init__(self):
        super().__init__()
        self.code = 7


class Style_Strikethrough(Style):
    def __init__(self):
        super().__init__()
        self.code = 9


class Colour(Style):
    def _render(self, isForeground=True):
        # Background variants of the ANSI colour codes are offset by 10.
        if isForeground:
            return "{}[{}m".format(term.ESCAPE_CHARACTER, self.code)
        else:
            return "{}[{}m".format(term.ESCAPE_CHARACTER, self.code + 10)


class Colour_Transparent(Colour):
    def _render(self, isForeground=True):
        return ""


class Colour_Black(Colour):
    def __init__(self):
        super().__init__()
        self.code = 30


class Colour_DarkRed(Colour):
    def __init__(self):
        super().__init__()
        self.code = 31


class Colour_DarkGreen(Colour):
    def __init__(self):
        super().__init__()
        self.code = 32


class Colour_DarkYellow(Colour):
    def __init__(self):
        super().__init__()
        self.code = 33


class Colour_DarkBlue(Colour):
    def __init__(self):
        super().__init__()
        self.code = 34


class Colour_DarkMagenta(Colour):
    def __init__(self):
        super().__init__()
        self.code = 35


class Colour_DarkCyan(Colour):
    def __init__(self):
        super().__init__()
        self.code = 36


class Colour_Grey(Colour):
    def __init__(self):
        super().__init__()
        self.code = 37


class Colour_DarkGrey(Colour):
    def __init__(self):
        super().__init__()
        self.code = 90


class Colour_Red(Colour):
    def __init__(self):
        super().__init__()
        self.code = 91


class Colour_Green(Colour):
    def __init__(self):
        super().__init__()
        self.code = 92


class Colour_Yellow(Colour):
    def __init__(self):
        super().__init__()
        self.code = 93


class Colour_Blue(Colour):
    def __init__(self):
        super().__init__()
        self.code = 94


class Colour_Magenta(Colour):
    def __init__(self):
        super().__init__()
        self.code = 95


class Colour_Cyan(Colour):
    def __init__(self):
        super().__init__()
        self.code = 96


class Colour_White(Colour):
    def __init__(self):
        super().__init__()
        self.code = 97
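The rendering scheme used above can be checked in isolation. This minimal sketch assumes `term.ESCAPE_CHARACTER` is the ANSI escape character `"\x1b"` (an assumption; `vtk.term` is not shown here) and mirrors the foreground/background offset in `Colour._render`:

```python
# Assumption: the terminal escape character is ESC (0x1b), as in ANSI/SGR codes.
ESC = "\x1b"

def render_colour(code, is_foreground=True):
    # Background variants of ANSI colour codes are the foreground code + 10,
    # mirroring the +10 offset in Colour._render above.
    return "{}[{}m".format(ESC, code if is_foreground else code + 10)

print(repr(render_colour(31)))         # dark red foreground: '\x1b[31m'
print(repr(render_colour(31, False)))  # dark red background: '\x1b[41m'
```

Printing `render_colour(31) + "error" + ESC + "[0m"` to a terminal would show "error" in dark red, then reset styling (code 0, as in Style_Reset).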
Name of this type of plot? Does anyone know how to produce it?

Does this type of polar plot have a name? Does anyone know how to produce it in octave 3.8.1, which is compatible with matlab? Link to site

It looks like the envelope of a family of straight lines. The picture you are showing appears to be one from Cristian Vasile. I believe the one you have shown is "Flow Of Life Flow Of Pi". "The art of Pi" by Martin Krzywinski is a similar piece of artwork. He has similar images for $e$ and $\phi$. On that linked page it mentions that he created his images using the free, open source (GPL) "Circos" data visualization software package that he maintains, and has a page describing the mathematical methods used to create it, and how he modified Vasile's method slightly to create his versions, specifically adding outer rings to identify the frequency of certain number repetitions and pairings. (He specifically highlights the 99999 that occurs at position 762.) His description of Vasile's method characterizes the process as jumping from number to number in the order of the digits in $\pi$. Thus, for 3.14159, it starts at the most counterclockwise position on 3, then follows an arc to 1, where it lands at position 2, then it arcs to 4, where it lands at position 3, etc. The actual locations of positions 1, 2, 3, etc. appear to be relative to the number of digits calculated. Hence, if you calculated 100 digits of $\pi$, each number would have 100 "landing spots" for each arc. I believe the image you linked uses 10,000 digits. The sweep of the arc is not described, but except for adjacent digits the arcs all appear to be circular (constant radius), which creates the apparent concentric rings near the center where the arcs peak. The outermost ring, created by adjacent digits, appears to be made using elliptical arcs. The colors on the arcs are simple gradients from one digit color to the other.
As the Circos code is free, open source software licensed under the GPL, anyone is free to view the source code and copy the mathematical algorithms. I don't believe anyone has generated a Matlab or Octave codeset to generate similar graphics. It should be possible; however, the most difficult part will probably be mapping the proper outward-facing circular arcs onto a polar map. It would require some apparently nontrivial trigonometry. I have also not seen gradients in plotlines in either gnuplot or fltk graphics options.
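The geometric skeleton of the method described above — ten digit sectors on a circle, with each consecutive digit pair connected by an arc whose endpoints march through "landing spots" inside the sectors — can be sketched numerically. This is an illustrative approximation of the layout, not Vasile's or Krzywinski's exact algorithm, and it computes straight-chord endpoints only (the circular-arc sweep and color gradients are the hard part noted above):

```python
import math

# A few digits of pi; real plots use thousands, giving each digit
# that many landing spots inside its sector.
digits = [3, 1, 4, 1, 5, 9, 2, 6]
n = len(digits)

def anchor(digit, position):
    # Each digit owns a 36-degree sector; spread the landing spots
    # for successive visits across the sector.
    theta = 2 * math.pi * (digit + position / n) / 10
    return (math.cos(theta), math.sin(theta))

# One chord per consecutive digit pair, landing spot advancing each jump.
chords = [
    (anchor(a, i), anchor(b, i + 1))
    for i, (a, b) in enumerate(zip(digits, digits[1:]))
]
print(len(chords))  # 7 chords for 8 digits
```

In Octave/Matlab the same endpoints could be fed to `plot`, and the chords replaced by constant-radius arcs to reproduce the concentric-ring effect near the center.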
Your system, application, and/or your infrastructure are going to have problems. Once you have your infrastructure under control, you will want to understand what’s happening in the system through Observability. The goal is to prevent as many issues as possible, but the reality is that problems are unavoidable. What’s most important is detecting problems before they impact your business or users. This is where monitoring and, more importantly, Observability come into play. Monitoring is something you and your team actively do. You will monitor your system to detect problems, which might mean running tests to check the availability and performance of your system. Observability is, on the other hand, a property of your system that uses outputs to understand what is going on inside. If your system doesn’t externalize its internal state, no amount of monitoring will help you detect specific problems in time. It is about knowing not only what is happening in your system but why it is happening. Using Observability has advantages beyond simply understanding what’s going on in your system, application, and infrastructure. Some problems can be detected early, before they are noticeable. Internal latency, too many locks in the database, or any problem you see on the backend may become noticeable if you do not take action. Detection and communication are key here. You want to detect problems before your users do, and you will also want to let them know if an issue will affect them. Letting the world know that you are actively monitoring and troubleshooting any given problem will help put your customers at ease. You will be able to debug and troubleshoot outages and service degradations, and tackle bugs, (un-)authorized activity, etc. It also helps you understand the uptime of your SaaS and the quality of service your users receive. Metrics are a crucial component of Observability, and of your business. They help us improve the application and infrastructure.
We are constantly looking for more ways to add metrics that help us understand how systems work. For example, latency or responsiveness can be measured with metrics. When observability and metrics are applied across the board, they should show that you are addressing all the challenges of managing your infrastructure properly: availability, productivity, costs, security, compliance, and scalability. Dashboards and correct visualizations are key. Once appropriately implemented and visualized, metrics improve the performance and quality of your business. As discussed above, when you start implementing Observability, you will look at metrics along with a combination of monitoring, log centralization, and tracing.
- Monitoring: run active checks to verify the availability of critical components.
- Metrics: collect data points to count errors, load, and other variables from all the components required to run your application, and from the application itself. You will obtain your metrics from your application and required services, infrastructure, pipelines, costs, and security incidents.
- Log centralization: record information about events in your system in a central location.
- Tracing: track and identify problems in requests crossing through your system.
At Flugel, we've put together an Observability and metrics checklist to help you detect problems before your clients notice them. By putting this checklist in place, you will begin to understand how well observability and monitoring are working in your organization. Over time you will gain insight into the health and performance of your products, processes, and people.
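As a toy illustration of the metrics idea above (not any particular vendor's API), counting errors against total requests is already enough to derive an error-rate signal you can alert on:

```python
from collections import defaultdict

class Metrics:
    """A minimal in-memory metrics registry: counts events per name
    so that error rates and load can be derived later."""
    def __init__(self):
        self.counters = defaultdict(int)

    def incr(self, name, n=1):
        self.counters[name] += n

    def error_rate(self):
        total = self.counters["requests"]
        return self.counters["errors"] / total if total else 0.0

metrics = Metrics()
# Simulated request outcomes; a real system would hook this into
# its request-handling path and export the counters periodically.
for status in (200, 200, 500, 200):
    metrics.incr("requests")
    if status >= 500:
        metrics.incr("errors")
```

In practice you would export such counters to a time-series store and plot them on dashboards, but the underlying data model is this simple.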
OPCFW_CODE
At the beginning of June I wrote a post about how to enable Third-Party Software Updates in SCCM Technical Preview 1806 without using SCUP. This week another release of SCCM Technical Preview hit the streets: 1806 in a second edition, also called 1806.2. This release further iterates on support for Third-Party Software Updates, and now enables us to add custom catalogs such as Adobe's. In this post I will walk you through how to do just that, and show you how to add the Adobe catalog for Acrobat Reader DC to Configuration Manager, thus allowing us to deploy updates for Adobe natively without using SCUP. I'm going to assume that third-party updates are already configured and working on your end. If not, you should start with my previous post right here: https://imab.dk/enable-third-party-software-updates-in-configuration-manager-technical-preview-1806/ To get this started, right click on Third-Party Software Update Catalog in the Software Library workspace of the Configuration Manager console. See below picture: - Fill out the details for the custom catalog. In this scenario, I have filled out the following details: - Download URL: https://armmf.adobe.com/arm-manifests/win/SCUP/ReaderCatalog-2017.cab - Publisher: Adobe - Name: Adobe Reader Classic - Description: Adobe Reader Classic custom catalog - Finish the wizard, ending up with below picture, selecting Close - Next step is to subscribe to the newly added Adobe catalog. Do so by right clicking on it and selecting Subscribe to Catalog - Complete the subscription process following my below pictures. I have taken snips of each step and highlighted the required actions. Note: This is exactly the same process as when subscribing to the built-in HP (Hewlett Packard) catalog. - After finishing off the subscription, select to sync the new update catalog via Sync now - Monitor the following server-side log file: <ConfigMgr InstallDir>\Logs\SMS_ISVUPDATES_SYNCAGENT.log.
Once the sync has completed, you will see the following entries in the log: - Next, when the syncing above is done, enable the new product in the properties of your Software Update Point in Configuration Manager - Do another synchronization of Software Updates, monitor <ConfigMgr InstallDir>\Logs\wsyncmgr.log, and wait for your Adobe updates to show up in the All Software Updates node - Tadaaa. Adobe Software Updates right there in your Configuration Manager console for you to deploy without using SCUP or any other third-party product. And that's a wrap for this specific scenario in System Center Configuration Manager Technical Preview 1806.2. I hope this was helpful and informative 🙂
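If you'd rather confirm the sync from a script than eyeball the log file, a rough sketch like the following could scan SMS_ISVUPDATES_SYNCAGENT.log for catalog-related entries. The search pattern here is illustrative, not the agent's exact log wording:

```python
import re

def find_sync_entries(log_text, pattern=r"SyncUpdateCatalog|Adobe"):
    """Return log lines matching a pattern -- a rough way to confirm
    that the ISV update sync agent processed the new catalog.
    The default pattern is a guess, not the log's exact wording."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [line for line in log_text.splitlines() if rx.search(line)]

# In practice you would read the real log file, e.g.:
#   log_text = open(r"C:\Program Files\...\Logs\SMS_ISVUPDATES_SYNCAGENT.log").read()
sample = (
    "Sync: starting catalog download\n"
    "Sync: processing catalog 'Adobe Reader Classic'\n"
    "Heartbeat: ok\n"
)
hits = find_sync_entries(sample)
```

The same approach works for wsyncmgr.log in the later step; only the pattern changes.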
OPCFW_CODE
Hello peterm and thanks. Well, what a ****** nightmare it's been - now the story: Did you use the cable that came with the router or did you use the crossover cable. I assume you used the one that came with the router which is the correct one to use. It wasn't the one that came with the Hub (router) but it's NOT a crossover cable - it was used to connect the Xbox and one PC before wireless. Are both computers near the router? this should work so easily, I am thinking more and more it's to do with something outside the router. Are the 2 computers still close enough to connect via crossover cable? Yes they are within 5 feet (do you use feet and inches where you are? that's approx a metre and a half). Let's try the cable into the router again please. Whichever computer is closest to the router, can you plug it in (not the crossover cable). Start the computer and do an ipconfig /all. If the ipconfig does not come back with an IP between 192.168.1.64 - 192.168.1.253 then go in My Network Places, click on the Local Area Connection, click on Properties, pick the TCP/IP, click on Properties and choose Use the following IP. Then enter IP 192.168.1.70, subnet 255.255.255.0, gateway 10.1.1.38, click OK, click OK. Can you still get the www. I chose the Systemax (the closest PC), powered down, unplugged the dongle, plugged in the ethernet cable and powered up. Everything seemed ok so I did an ipconfig /all which showed the IP address as 192.168.1.68 but no WWW. Now things start to go wrong - I broke my golden rule: DON'T DO ANYTHING YOU ARE NOT TOLD TO WHEN YOU ARE NOT THE EXPERT. I thought I would try your settings anyway, which I did - still no WWW access. Then a 'Help' pop-up box appeared from my ISP saying do you want us to fix your access problem. Now, like a complete fool, I clicked 'Fix'; this then went through 'tests' and ended up saying we can't fix it !!!!
- still no WWW. I then changed the network settings back to 'Obtain an IP address automatically', switched off the Systemax, unplugged the cable and plugged in the dongle. I switched on again and found I still had no WWW. At this point I found I could not get the WWW on the Dell either !!!! - I confess at this time the air was very blue indeed !! Everything appeared ok, all the tray icons were ok, so in desperation I used the installation CD for my ISP connection in the Systemax. This went ok until, during the checking stage, I was advised that either my 'wireless key' was wrong or my Hub (router) was unreachable. All lights on the Hub have been ok throughout this whole sorry tale - including the light coming up to advise that a cable was in use. I then powered down the Hub, repowered it, and much to my relief got the WWW back on both PCs. The network connections window does however show the 'Internet connection' under the heading 'Internet gateway' as disabled - this is I think the Xbox cable connection - I'll get my son to try the Xbox (on wireless) next time I see him (Soccer tonight!). You said that you had the computer connected before via cable; that's what I am hoping for again. My plan is to see if 1 computer can connect via the cable, then we connect the 2nd computer via cable, then see if we can file share. If we can get that to work then we try step by step for wireless. If this does not help then we try crossover to eliminate the hub. I had the Dell connected by cable before. I did not use that today as I was afraid of losing the WWW. I was wrong anyway. I may get you to delete Zone Alarm, not just turn it off (but as a last resort). That will not be a problem on the Systemax as it's a 'free use' download, but on the Dell it's part of the ISP's suite of 'Protection' software - but we can look at that later if you want.
All this does not help you at all, does it - let me know what you want to do - any screen pics etc - do you want me to try the Dell on the cable again - not too happy about that one, but if it eliminates something I'll give it a go. Thanks for all your help, I do appreciate it.
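One detail worth double-checking in the static-IP advice above: a default gateway of 10.1.1.38 sits outside the 192.168.1.0/24 subnet, so a host at 192.168.1.70 / 255.255.255.0 cannot reach it directly. A router's own LAN address (often something like 192.168.1.1, though the right value depends on the specific hub) is the usual choice. A quick sanity check, sketched with Python's standard ipaddress module:

```python
import ipaddress

def gateway_reachable(ip, netmask, gateway):
    """A static default gateway must sit inside the host's own subnet,
    or the host has no way to send traffic to it directly."""
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# The values suggested in the thread:
bad = gateway_reachable("192.168.1.70", "255.255.255.0", "10.1.1.38")
# A gateway on the router's own LAN subnet passes the check:
ok = gateway_reachable("192.168.1.70", "255.255.255.0", "192.168.1.1")
```

This would explain why the WWW dropped as soon as the static settings were applied, and came back after reverting to automatic addressing.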
OPCFW_CODE
KNQL does not query exact phrases containing multiple tokens/words
Plugin version: 6.2.1
Kibana version: 6.2.1
Plugins installed: Elasticsearch 6.2.0
Description of the problem including expected versus actual behavior: Unable to query the body of the document for a multi-word phrase. If nesting is turned off, this works fine. Example: finding "test post" in "this is a test post". Expected behavior: find posts that contain the phrase. Actual behavior: "No results found". The only way to fix this is to use keyword instead of text as the type, and then query body.keyword~="the post"
Steps to reproduce:
1. add posts
curl -XPUT 'localhost:9200/vs_views/_doc/1?refresh&pretty' -H 'Content-Type: application/json' -d'
{ "body": "This is a text of the post", "post_date": "2017-01-01" }'
curl -XPUT 'localhost:9200/vs_views/_doc/2?refresh&pretty' -H 'Content-Type: application/json' -d'
{ "body": "This is a another text of the post", "post_date": "2016-12-01" }'
2. in Kibana "Management" --> index pattern --> create index pattern; then turn on nested fields
3. in Kibana "Discover" search 'body="the post"'
Errors in browser console (if relevant): No results found. Unfortunately I could not find any results matching your search. I tried really hard. I looked all over the place and frankly, I just couldn't find anything good. Help me, help you. Here are some ideas: Refine your query. The search bar at the top uses Elasticsearch's support for Lucene Query String syntax. Let's say we're searching web server logs that have been parsed into a few fields. Examples: Find requests that contain the number 200, in any field: 200. Or we can search in a specific field.
Find 200 in the status field: status:200. Find all status codes between 400-499: status:[400 TO 499]. Find status codes 400-499 with the extension php: status:[400 TO 499] AND extension:PHP. Or HTML: status:[400 TO 499] AND (extension:php OR extension:html). Provide logs and/or server output (if relevant): If using docker, include your docker file: Normal query using Lucene query syntax without nested support. Activating nested support on an index, the same query now cannot find the multi-token phrase. Thanks for the report, this looks like a limitation in the KNQL language. The equals (=) operator is a term query; LIKE (~=) is a wildcard. I did not code a match_phrase into the language, as it wasn't a requirement at the time I built this out. It would seem to me that match_phrase is similar in behavior to a wildcard query without the actual wildcard operators, i.e. re-using the concept of a LIKE makes sense in the language. With this in mind, I will update the parser to check the value of the string passed into the LIKE expression and use wildcard if a * or ? is present, otherwise it will use match_phrase. Thank you Pierre! Just to be clear, it needs to be able to match MULTIPLE tokens, not just a SINGLE token with wildcards. Also, alternatively, could the nested support be used without KNQL? Annie. Yep. I've got the code working with your example.. it really was only about 10-12 lines of code to make the change. Unfortunately, KNQL is required for indices with nested fields. The reason for this is the need to pre-parse the query and generate a native Elasticsearch query that injects the correct nested path information into the query. In order to do this, the plugin gathers more information about the index from Elasticsearch, and keeps track of the field information. When you build a query, the plugin's parser uses its knowledge of the index's mapping to properly generate nested queries without you having to manually inject additional information.
This breaks down under certain circumstances, as stated in the KNQL documentation, hence the need for the EXISTS keyword to properly scope nested queries. So, I'm sure you're wondering why the Lucene query language in Kibana doesn't work with nested fields. The main reason is that Kibana doesn't actually parse the query; it just blindly passes it to Elasticsearch and has Elasticsearch do the processing. Additionally, the Lucene query language has no concept of nested fields built into it. I've released 6.2.1-1.0.2 which should solve this issue. Please install the new version and let me know if it works for you. I believe this is now resolved and back-ported to all releases. Please let me know if this is still an issue and I'll reopen. Yes, double checked: it works for my test case! I can get phrase matching with 'body~="the post"'. Also checked in the visualization and it works with filters. Thank you! Closed #43 (https://github.com/ppadovani/KibanaNestedSupportPlugin/issues/43).
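The parser change described in this thread (emit a wildcard query when the LIKE value contains * or ?, otherwise fall back to match_phrase) can be sketched roughly as follows. The query shapes follow Elasticsearch's query DSL, but the function and its name are illustrative, not the plugin's actual code:

```python
def like_to_query(field, value):
    """Sketch of the LIKE (~=) dispatch: if the value contains a
    wildcard character, emit a wildcard query; otherwise use
    match_phrase so multi-token phrases can match analyzed text."""
    if "*" in value or "?" in value:
        return {"wildcard": {field: value}}
    return {"match_phrase": {field: value}}

q1 = like_to_query("body", "the post")   # multi-token phrase -> match_phrase
q2 = like_to_query("body", "the po*")    # wildcard present   -> wildcard
```

For a nested field, the plugin would additionally wrap the resulting clause in a `nested` query with the correct path, which is exactly the mapping knowledge it gathers up front.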
GITHUB_ARCHIVE
That's a ton of questions right there :) Two things are most important for all of these questions/cases, which by the way also apply to most other cases: I) Answer the question. That will put you ahead of ~60% of other applicants (I'm serious). If the question is about "profit", your answer at the very end should be a number (with a unit!) for the profit amount. One of my favorite questions in the personal-fit part of the interview starts with "Name 3 adjectives that..." -- you wouldn't believe how many candidates fail to name 3 (mostly they just come up with 2, but I have gotten 5), or fail to name adjectives. While not getting it 100% right doesn't have to be a dealbreaker per se, there is really no need to make this mistake. And this is what consulting in general is about: answering specific questions with specific answers. For this case, there's very little context about the industry or anything, so you could structure it as follows (just one possible option off the top of my head): - Define the goal. The goal is to convince the CEO to hire your firm for a project. Repeating the goal of the case shows you're working in the right direction, keeps you from getting off track, serves as step 1 out of 3 (only 2 steps remaining; you always need 3 steps for ANYTHING in consulting :), and buys you 45 seconds of thinking time. - Define requirements to reach the goal. If you want to sell a project, the requirement is that you offer value to the CEO (greater than the consulting fees). So, what would the CEO pay for? To get one of his most important problems solved. So you need to find out what they are. Possible questions (after small talk) to find out are "What keeps you up at night?" (a bit overused), "If you had one free wish / a genie came to you to solve one of your problems, which would it be?"
(better maybe even: "take one of your issues off your plate") or "What are the most difficult problems in your industry right now - do they also apply to you?", or, the scare tactic, "Where is your competition better than you?". That should get the CEO talking, and you can ask further and more detailed questions to understand his problems. (Maybe there is a better term than requirements, but you get the idea.) - Offer a proposal. Once you understand the problems the CEO is facing, you would structure a proposal for how to help him, answering WHAT you can do and WHY you can do it. "What" is basically throwing expertise and (consulting!) resources at the problem; "Why" has to convey confidence that you're the right person to do it, appealing to the CEO's emotions. It's basically a variation of "we've done it before" (in an adjacent industry, etc.). So, if the question is asked the way it is, my approach in the interview would be to explain the structure first (basically, the three steps in bold), and then detail them out. Alternatively, detail 1) out to cash in the thinking time, then show the structure for 2) and 3), and then afterwards detail 2) and 3) out. Then, at the end, ANSWER THE QUESTION! People call this "summarizing", but it's not; it's basically showing that you know what the problem was and giving the answer. In this case, the answer to "how would you convince the CEO to hire your firm for a project", aka the goal from 1). Why am I stressing this? The answer to this is not 3), the proposal -- the case question is about an overall approach to the CEO talk, so the answer needs to include all three steps. By the way, the question would be even more difficult as a roleplay, because then you need to do all of this in your head, and package it into a nice and friendly (read: "non-awkward") conversation. II) Show confidence. This is the second most important thing in all interviews. It subsumes: - Being in charge - the interviewee leads the case.
- Posture/sitting up straight. - Clear, confident voice. - Being "natural" -> a Goldilocks state of perfect moderation (not too loud, not too quiet. Not talking too much, not too little. Not too fast, not too slow. ...) - Being confident enough to listen to the interviewer; in most cases, she or he will try to be helpful (but don't "beg" for guidance). Why is this especially important in consulting? Most of your clients are C-level types, typically alpha-females and -males. Many are used to having staff who look up to them or are scared to speak out. One of the roles in consulting is to speak to these executives as an equal, being a true partner and guide. And if you show submissive behavior, they don't take you seriously. And for sure you don't sell projects then. (Ok, this is a bit of an overgeneralization, but you get the general message.) Since you mention "panicking" in your question, one trick I learned prior to my interviews years ago to help me was this: Think about a similar situation where you felt confident and in complete control. For example, a talk you gave in class about a topic you knew inside and out, or a presentation you aced. Think about how you felt and behaved in this situation (posture, breathing, voice, ...). Get yourself into that state, and try to "save" it into a specific gesture -- my gesture was pinching my left thumb with my right thumb & index finger. Repeat and practice that 3 times a day for 5 mins each. Once you're in the interview and feel panicked/stressed, recall that feeling by secretly performing the gesture. You really need to physically practice this until it becomes subconscious, otherwise it won't work -- it's of no use that you get the concept intellectually! It actually really helped me, despite not being a fan of esoteric stuff in general, but hey, whatever works! ;) Sorry for the long answer, but I didn't have the time to write a short one :) (Pascal) All the best for your interviews!
OPCFW_CODE
How to pass data from a page list to a details page. I'm working with Phonon and Riot and have reached a point where I am not sure what to do. I have a list of Topics displayed, each with a unique topic_id. When I load my details page I can pick up the id from my URI, but the Riot script code will only execute once, on the first load. After that, no matter what, if I go back and pick a different topic it never executes the Riot code again, so the data is always the same as the first time. Suggestions? Hi, do you use page events such as create or ready? Inside of the ready event, for example, you can call update() to update your view: http://riotjs.com/api/#updating The "ready" event is called when the page becomes visible to the user (every time), so you can perform your update here. Hope it solves your problem! Let me know :) I did figure out that is what I needed to do and I do have it working now, thanks. Is there any way to reference the Riot context for the opts? The only way I could figure out how to do it was in my tag file script: create a global for the tag, i.e. tagTC = this - so I could update the template from the ready state of the Phonon page through tagTC. You can call the update() function with the current context (this) or with the global namespace (riot): this.update() // inside the tag riot.update() // global update To my knowledge, keeping the tag's context in a global variable is the best thing to do, because riot.update() is not "efficient" if we want to update only 1 tag. Concerning Phonon, it uses a bridge, for example, to (auto-)update tags (opts + views) when the app language has changed, i.e. when phonon.updateLocale("NEW_LANG"); is called in JavaScript. To update opts inside tags, a "bridge" called "tagManager" has been created.
You can use it because it is registered as a public module, so you can trigger events like so: phonon.tagManager.trigger("PAGE_NAME", "EVENT_NAME", "EVENT_PARAMS"); For example: phonon.tagManager.trigger("detail", "ready", "hello again"); Tried to use your approach in my tag file with hashchanged but have a problem. The first time in, req.params has data. If I go back to my topics and select a different topic, hashchanged fires but req.params is undefined. After looking closer at req I see the problem. The first time in, req.params has the value and req.action is blank; the second time in, req.params is blank and req.action holds the value. I also like how Riot names the parameters, so I updated this! :) If the same error is present, please tell me! self.onHashChanged(function(req1, req2, req3) { console.log('req1 ' + req1); console.log('req2 ' + req2); console.log('req3 ' + req3); }); Thank you for your suggestion. Thanks. Not sure on the usage? I was using, in my Riot tag, this.on('hashchanged', function(req) { .. }). Tried changing to this.on('hashchanged', function(req1, req2) { .. }); req1, req2 come back as undefined? I see what the problem is now. The URI that back uses from mytopics is #!/home which is correct. From topicmessages it uses #!getmytopics1 which is incorrect - the URI used to access the page is actually #!getmytopics/1 - it's dropping the / for some reason. Oh! I think I just fixed this. Please, can you tell me if it is now working? Thank you! :) Looks good - it's working now as it should.
GITHUB_ARCHIVE
wireless at tampabay Jun 2, 2011, 10:22 AM Post #6 of 9 On 06/02/11 08:15, Jan Kundrát wrote: Re: Re: okupy, a Django rewrite of www.g.o [In reply to] > On 06/02/11 13:54, Theo Chatzimichos wrote: >> Hey man, relax! :) >> I'm also a translator, I have some ideas/needs that need to be implemented. >> I was going to ask you (you as in other translators) for additional >> ideas, I just didn't get there yet, I'm still struggling with the LDAP >> backend for the time being. > Theo, I'm not sure how to articulate my point in a better way here. I > would've thought that you would ask your target audience *before* you > got your project approved. > Anyway, I said what I wanted to say, so let's just use this opportunity > to produce something useful. /me crosses fingers. EVERYONE is missing the point, imho. ANY documentation system *should* have a facility where anyone can create content, quickly and not as part of the *glorified* official documentation. Fast-moving technologies, such as open-source software, would do much to *EXCITE* the user base if documentation were easy, a little sloppy on form and features, but *CURRENT* on key information. In the absence of this sort of scenario, folks repeatedly handle support via a variety of means. As the content gets refined and becomes quite reasonable (as usually happens over time), *THEN* it can be put into proper form. The lack of this sort of "quick and easy" approach leads to too many details that make Gentoo documentation substandard (not current) at best. If you ask most folks, the freshness of current content far outweighs the importance of appearance for documentation. Hundreds of times I have sent private email to folks with sloppy content notes on how I fixed something, and *EVERY TIME* they are most grateful for the currentness of the information, despite it being just a sloppy (vi) raw-text file. *PRETTY without CURRENCY* is mostly useless, imho.
Just look at the docs related to building a software-raid system for a new Gentoo installation, for example. Pathetic! Yet software raid is a fundamental need that needs to be converted into a "GENTOO COMMODITY", imho. Over the last 7 years, I've watched hundreds of very smart and reasonable folks come to Gentoo, want to update or create good documentation, and work *WITH* the Gentoo devs. Hard-line attitudes cause them to leave, quickly, in many circumstances. Documentation is quite often the key issue. Time and again the arcane gymnastics that are employed (to a point of cult worship) take precedence over content enhancement. Until this is fixed (technically and attitudinally) you'll never keep up on documentation or attract bright folks to contribute to docs. So most technical folks that stay with Gentoo just fade into the background. That is what needs fixing *OVERWHELMINGLY* compared to any of these other trite issues and fiefdoms....... If you really want this maze to create documents, then *FIRST* create the docs that folks can follow to create acceptable docs. Keeping this sort of *CLEAR* information from the masses is the equivalent of a fiefdom, or at least that is the appearance that others perceive. Me, I just make my own docs and do not worry about sharing them, because of the overall attitude of those that control Gentoo, particularly the DOC teams.....imho. I'm very sorry if this sort of email hurts anyone's feelings, but you really should live where the average user lives for a few days and LISTEN for a bit. Gentoo is no longer a "sole proprietor system"; it is a multi-national conglomerate, and documentation should be the first training ground for those seeking "DEV" status, imho.
OPCFW_CODE
Is there an alternative to Excel in the field of budgeting and business intelligence? In recent years, the topic of economic planning and analysis has become more relevant. At the same time, it has become more and more obvious that these functions are not efficiently implemented in expensive, large-scale ERP systems, in which their presence is assumed from the start (the "P" in the abbreviation even stands for "Planning"). Despite huge budgets and titanic implementation efforts, the economic units of medium and large enterprises have worked, and continue to work, in spreadsheets, mainly MS Excel. The JetCalc source code is published on GitHub, along with links to the documentation, a working demo, and other additional resources. The system is distributed under the MIT license and is open to proposals from anyone interested in participating in its further development. Before proceeding to the peculiarities of the JetCalc architecture, it should be said that JetCalc is a free version, implemented in the JavaScript ecosystem, of a closed system built on Microsoft technologies which since 2012 has supported budgeting, economic analysis, and consolidation of management and financial reporting, including the preparation of consolidated financial statements in accordance with IFRS, at a large metallurgical holding with an annual turnover of more than $10 billion. In the JetCalc system, as in Excel, all calculations are performed based on formulas that the end user develops and tests. At the same time, the JetCalc computation system has a number of unique properties that make it easy to modify the data models used and to generate complex consolidated reports in real time. A key feature of the JetCalc data model is the way cell formulas are created.
Whereas in Excel formulas are written for each cell, in JetCalc formulas are written for a row or a column, and at the cell level the formulas are generated dynamically by the system in the context of an open document. This approach drastically reduces the time needed to change formulas and completely eliminates arithmetic errors. Moreover, individual columns are combined into headers ("caps") for certain kinds of documents, which allows you to change column formulas for several documents at once, in one place. Another feature of JetCalc is a specialized mechanism for summing cell values along the rows of a document, based on a tree of rows in which the summation is performed over the child rows of each parent row. So instead of enumerating the cells that must be entered as arguments to Excel's SUM formula (A1; A2; ...), in JetCalc it is enough to tick the required sum row in the web interface. Any row can also be marked as excluded from the sum, or as summed with the opposite sign (that is, subtracted). When you add new rows, unlike in Excel, you do not need to change any settings in JetCalc, because the cell formulas are automatically re-generated in the context of the open document. The third important feature of JetCalc is the collection of information in the context of accounting objects organized as a tree with a number of attributes, allowing complex aggregation and filtering calculations to be performed by writing simple and understandable formulas. For example, for a division of Metallurgical Enterprises (code MET), which includes Ural Metallurgical Plant JSC (code 201) and Ural Rolling Plant JSC (code 202), the divisional total formula of any primary cell in the context of the document will be converted to the form: $string@column#201? + $string@column#202?
The same expression can be represented as a formula with the consolidation function, which will be automatically extended when one or more enterprises are added to the MET group: $string@column(D:MET)? The JetCalc core also includes a mechanism for auto-pumping values into data-entry forms, which significantly reduces the load on the calculation system by storing calculated values in the database once, as primary values. Such stored values can later be reused by the accounting system when forming various analytical calculations. Auto-pumped values are configured with the same formulas as dynamically calculated values. The choice between dynamic formulas and auto-pumped values is entirely up to the user configuring the domain model, and amounts to a trade-off between ease of administration and the speed of calculating a document's indicators: dynamic formulas need only be configured once, but as the model becomes more complex and the amount of data grows, report generation will gradually slow down; auto-pumping formulas replace calculated values with primary ones, which dramatically increases the performance of the reporting system, but requires greater discipline when modifying the document structure, since previously pumped values may need to be re-pumped after changes to the document settings. More details about the JetCalc calculation system can be found in the documentation.
If there are nonzero values in the checkpoints, the document cannot be locked against data entry, which means it cannot be officially considered submitted on time to the higher-level organization. This approach parallelizes the work of finding logical errors across hundreds of employees at the reporting organizations, instead of a few employees at the higher organization. And of course, the JetCalc system includes standard features such as printing documents, saving reports to PDF files, displaying the data of individual documents as graphs, creating subject documentation for each document, and much more. Among the promising things that have proved realizable in practice is the possibility of distributing once-created models to an unlimited number of subscribers via GitHub. This feature is based on storing the domain models in a MongoDB database and the values in PostgreSQL: a domain model is thus a file in JSON format, which is easy to load into MongoDB from any source. In conclusion, I would like to say that at present the project is developing through the personal initiative of its participants and is roughly 90% ready for application in real "combat" conditions. The remaining 10% require thorough debugging of the system to a commercial level in all directions - from testing deployment scripts, improving the functionality of the calculation system, and improving the ergonomics of the web interface, to writing documentation, creating demos, and developing formats for saving models and protocols for exchanging data with external systems, and much more.
Therefore, everyone who is interested in the development of the project is invited to join the development team, which today consists of two people. Working on it, you can find like-minded people, gain unique knowledge of a product that has no analogues on the market, and realize your most fantastic ideas.
Generation of custom LLDP packets

I am trying to create a custom LLDP packet by adding my own organizationally_specific TLV field.

`
class organizationally_specific (simple_tlv):
  tlv_type = lldp.ORGANIZATIONALLY_SPECIFIC_TLV

  def __init__ (self, **kw):
    self.oui = b'\x00\x00\x00'
    self.subtype = 0
    self.payload = b''

  def _parse_data (self, data):
    (self.oui, self.subtype) = struct.unpack('!3sB', data[0:4])
    self.payload = data[4:]

  def _pack_data (self):
    return struct.pack('!3sB', self.oui, self.subtype) + self.payload
`

My field just carries a random UUID and a TIMESTAMP. What code changes should be made to add these custom TLVs, and what changes to discovery.py, so that the added TLVs are sent out in the packet-out message during the discovery process?

`
def create_packet_out (self, dpid, port_num, port_addr):
  """ Create an ofp_packet_out containing a discovery packet """
  eth = self._create_discovery_packet(dpid, port_num, port_addr, self._ttl)
  po = of.ofp_packet_out(action = of.ofp_action_output(port=port_num))
  po.data = eth.pack()
  return po.pack()

@staticmethod
def _create_discovery_packet (dpid, port_num, port_addr, ttl):
  """ Build discovery packet """
  chassis_id = pkt.chassis_id(subtype=pkt.chassis_id.SUB_LOCAL)
  chassis_id.id = bytes('dpid:' + hex(long(dpid))[2:-1])
  # Maybe this should be a MAC. But a MAC of what? Local port, maybe?
  port_id = pkt.port_id(subtype=pkt.port_id.SUB_PORT, id=str(port_num))
  ttl = pkt.ttl(ttl = ttl)
  sysdesc = pkt.system_description()
  sysdesc.payload = bytes('dpid:' + hex(long(dpid))[2:-1])

  discovery_packet = pkt.lldp()
  discovery_packet.tlvs.append(chassis_id)
  discovery_packet.tlvs.append(port_id)
  discovery_packet.tlvs.append(ttl)
  discovery_packet.tlvs.append(sysdesc)
  discovery_packet.tlvs.append(pkt.end_tlv())

  eth = pkt.ethernet(type=pkt.ethernet.LLDP_TYPE)
  eth.src = port_addr
  eth.dst = pkt.ETHERNET.NDP_MULTICAST
  eth.payload = discovery_packet
  return eth
`

This looks like it should be an issue and not a PR. Can you submit an issue instead?
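One way to reason about the question is to look at the wire format the TLV has to produce. The sketch below packs an organizationally-specific LLDP TLV (type 127) whose payload is a random UUID plus a timestamp, using only the struct module and no POX imports; the OUI, subtype, and payload layout are assumptions for illustration. In POX itself the equivalent packing would live in a subclass like the one quoted above, and an instance of it would be appended to discovery_packet.tlvs in _create_discovery_packet, before the end_tlv.

```python
import struct
import time
import uuid

ORG_SPECIFIC_TLV_TYPE = 127  # LLDP organizationally-specific TLV type


def pack_org_specific_tlv(oui, subtype, payload):
    """Pack an organizationally-specific LLDP TLV (type 127).

    Wire format: a 2-byte header (7-bit type, 9-bit length), then a
    3-byte OUI, a 1-byte subtype, then the payload bytes."""
    body = struct.pack('!3sB', oui, subtype) + payload
    header = (ORG_SPECIFIC_TLV_TYPE << 9) | len(body)
    return struct.pack('!H', header) + body


# Hypothetical payload: a random 16-byte UUID followed by an 8-byte
# timestamp (double, network byte order).
payload = uuid.uuid4().bytes + struct.pack('!d', time.time())
tlv = pack_org_specific_tlv(b'\x00\x00\x00', 0, payload)

# body = 3 (OUI) + 1 (subtype) + 16 (UUID) + 8 (timestamp) = 28 bytes,
# plus the 2-byte TLV header = 30 bytes total.
print(len(tlv))  # 30
```

A receiver can recover the fields by unpacking the first four body bytes with the same '!3sB' format, which is what the _parse_data method in the quoted subclass does.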
Yes, I created an issue; you should be able to see it.

Hello Murphy, I did create an issue. You could help me look at it.

-- Joseph Katongole, UNIX Systems Engineer
#![allow(dead_code)]

extern crate sdl2;
use sdl2::Sdl;

pub mod cartridge;
mod cpu;
mod ppu;
mod graphics;
mod apu;
mod audio;
mod controller;
mod input;

use cartridge::Cartridge;
use cpu::CPU;
use ppu::PPU;
use graphics::EmulatorGraphics;
use apu::APU;
use audio::EmulatorAudio;
use controller::Controller;
use input::{ EmulatorEvent, EmulatorInput };

use std::cell::RefCell;
use std::rc::Rc;

struct EmulatorContext {
    graphics : EmulatorGraphics,
    input : EmulatorInput,
    audio : EmulatorAudio,
    sdl_context : Sdl,
}

impl EmulatorContext {
    pub fn new() -> EmulatorContext {
        let sdl_context = ::sdl2::init().unwrap();
        EmulatorContext {
            graphics : EmulatorGraphics::new(&sdl_context),
            input : EmulatorInput::new(&sdl_context),
            audio : EmulatorAudio::new(&sdl_context),
            sdl_context : sdl_context,
        }
    }
}

pub fn run_emulator(cart : Cartridge) {
    let mut emulator = EmulatorContext::new();

    let cart = ComponentRc::new(cart);
    let ppu = ComponentRc::new(PPU::new(cart.new_ref()));
    let apu = ComponentRc::new(APU::new());
    let controller = ComponentRc::new(Controller::new());
    let mut cpu = CPU::new(
        cart.new_ref(), ppu.new_ref(), apu.new_ref(), controller.new_ref());

    use std::time::{ SystemTime, Duration };
    let start = SystemTime::now();
    let mut num_frames : usize = 0;

    cpu.send_reset();
    let frame_len = Duration::new(0, 1_000_000_000u32 / 60);

    'running: loop {
        let frame_start_time = SystemTime::now();

        // step until CYCLES_PER_SCANLINE cycles passed, render scanline;
        // when 240 scanlines rendered, step for 20 scanlines
        for scanline in 0..240 {
            cpu.step_for_scanlines(1);
            ppu.borrow_mut().render_scanline(scanline);
        }

        // post-render scanline
        cpu.step_for_scanlines(1);

        // start vblank
        ppu.borrow_mut().set_vblank();
        if ppu.borrow().nmi_enabled() {
            cpu.send_nmi();
        }

        // cpu during vblank and pre-render scanline
        cpu.step_for_scanlines(21);
        ppu.borrow_mut().clear_vblank();

        emulator.graphics.update(ppu.borrow().get_pixeldata());

        for event in emulator.input.events() {
            match event {
                EmulatorEvent::Exit => break 'running,
                EmulatorEvent::Continue => (),
                EmulatorEvent::ControllerEvent { action, button } =>
                    controller.borrow_mut().update(action, button),
            }
        }

        let frame_duration = frame_start_time.elapsed().unwrap();
        // if the frame took less time than 1/60 of a second,
        // then delay so we get 60fps
        match frame_len.checked_sub(frame_duration) {
            Some(duration) => std::thread::sleep(duration),
            None => (),
        }
        num_frames += 1;
    }

    let duration = start.elapsed().unwrap();
    let freq = num_frames as f64
        / (duration.as_secs() as f64
           + (duration.subsec_nanos() as f64) / 1_000_000_000f64);
    println!("ran at an average of {:.2} frames/sec", freq);
}

trait Memory {
    fn storeb(&mut self, addr : u16, val : u8);
    fn loadb(&self, addr : u16) -> u8;
}

pub struct ComponentRc<T> ( Rc<RefCell<T>> );

impl<T> ComponentRc<T> {
    fn new(val : T) -> ComponentRc<T> {
        ComponentRc ( Rc::new(RefCell::new(val)) )
    }

    fn new_ref(&self) -> ComponentRc<T> {
        ComponentRc ( Rc::clone(&self.0) )
    }
}

impl<T> std::ops::Deref for ComponentRc<T> {
    type Target = Rc<RefCell<T>>;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}
The Helyx Initiative: Canyon Crest Academy students to host hackathon The Helyx Initiative, started by a group of Canyon Crest Academy students, is holding a hackathon April 10-12 with a $400 grand prize. The initiative started as a way to demonstrate the ways biology, math and computer science are related. It has since grown to multiple chapters around the world with about 125 members in 12 countries. According to two of the students who launched the organization, school curriculums don’t typically convey the correlation among those fields. “In the real research world, these fields are always connected,” said Canyon Crest sophomore Andrew Gao, 16, founder and president of the Helyx Initiative. “You need coding and biology, and you definitely need math and biology too.” The hackathon, which offers individuals and teams a chance to create a software project, will be hosted on the Helyx Discord server. It will begin April 10 at 10 a.m. with an opening ceremony. Workshops and seminars, taking place on the Zoom videoconferencing software, will begin April 11 at 6 p.m. The hackathon projects will be due on April 12 at 5:30 p.m., followed by judging. Award presentation and closing ceremonies will be at 8:30 p.m. In addition to the grand prize, there will be a $200 prize for second place and a $100 prize for third. There will also be $25 Amazon gift cards awarded to the winners of multiple categories, including best design, most innovative, best presentation and most socially impactful. When he decided to start Helyx, Andrew said he reached out to friends, including Mason Holmes, who now serves as the organization’s chief development officer, and Joanne Lee, its editor in chief. Other student leaders include Sid Udata, chief marketing officer; William Kang, assistant technical officer; and Yash Gupta, chief technical officer. “We thought we’d be a fill-in for the gap,” said Yash, 15, a sophomore at Canyon Crest Academy. 
“We only saw it as a local goal, but now it’s hitting international. We’re really proud of that.” Yash added that the hackathon helps illuminate the concepts that the Helyx Initiative wants to promote. “I know how crucial it is and how fun they are, and how accessible STEM becomes just because of them,” he said. Long term, Andrew said Helyx’s goals include doing more international outreach, getting more of its research published and creating a program that helps other Helyx chapters host their own hackathons. The Helyx Initiative also has a series of research groups that students can join online. One of them focuses on COVID-19, the disease caused by the novel coronavirus. Students are using the Python coding language to perform data analysis to monitor the spread of the disease and identify noteworthy trends. The group has also raised approximately $500 to buy N95 face masks for hospitals as they battle the novel coronavirus pandemic. “We’re really trying to be very active in the community,” Andrew said. For more information about the Helyx Initiative, visit helyx.science. Registration for the hackathon is free. The sign-up form and other information is available at hackthehelyx.glitch.me.
I'm trying to use AnyDesk for remote access to my Mac mini, but I cannot get it to work without physically logging in first. Shortly after I log in for the first time, it starts to work. When I sign out, I cannot connect anymore. This makes it virtually useless for remote access, as sessions are cut off after some time and the connection is no longer available. Does anyone have an idea? I bought a new hard drive to use as a Time Machine backup. diskutil list shows it as: – /dev/disk4 (external, physical): #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *2.0 TB disk4 1: Windows_NTFS 2.0 TB disk4s1 However, it does not appear in Disk Utility or Finder. How should I format it? The last time I did something similar, I was asked if I wanted to use it as a Time Machine I want to capture old mini-DV and Hi-8 tapes and other analog content for editing, and I don't want to guess which kind of adapter I need to get it in. Is it also possible to use the FireWire cable from my Sony PDX10 (which still works great)? And finally: which are the better video editing and photo programs to install? This is hobby work, thank you. I'm having trouble getting Mojave to recognize an external Samsung X5 drive. When I plug in the drive, it lights up, but the operating system does not seem to see it. I connected the drive to a Windows PC with a Thunderbolt 3 connector and Windows saw the drive fine, so it is not a hardware issue. Samsung pre-installs encryption software on the drive. I disabled it under Windows, hoping that the Mac would then see the drive, but no dice. I have the latest version of the Samsung software installed, but that does not seem to help. I am at a loss; any thoughts would be appreciated.
The Mojave upgrade from 10.14.5 to 10.14.6 broke my old VLC installation (already fixed by upgrading VLC), and I think it also damaged my Homebrew-provided build of Meld 3.19.2.osx6. I cannot find a newer build than mine. What other way is there to do merges on macOS? (What alternative program is there?) Previously I used sudo pmset -a hibernatemode 25 to make my MacBook hibernate instead of sleep: it cuts the power to the RAM etc. and puts the laptop into a proper hibernation state. Since a few major updates ago (maybe El Capitan), pmset ... 25 no longer puts my MacBook to sleep. I am using a 13" MacBook from 2017. Is hibernation no longer supported on this model? After hearing about the defect in the 2017 15-inch MacBooks, I have to admit I'm afraid a problem will also be found with the 13-inch MacBooks, and I want to be as safe as possible. I use many applications whose development and testing state must be maintained over many days, so shutting down my computer daily is neither efficient nor a viable option. Yesterday I installed the macOS 10.14.6 update and now Java performance is very bad. This is particularly noticeable when Maven is used to build Java applications. There are projects whose builds used to really load the CPU, but now they barely use the CPU and take much longer to build. Activity Monitor shows that the CPU, memory, disk, and network are idle most of the time during the build. I installed a newer Java SDK and upgraded Maven to see if that would improve performance, but it made no difference. I installed Boot Camp on a late-2013 27-inch iMac.
Before running Boot Camp Assistant, I was running a fully updated Mojave. I installed a Windows 10 image from the Microsoft download site. After installing Windows, everything worked fine, including all updates. But I cannot start the boot manager or recovery mode; it goes straight to Windows. If I use the Boot Camp control panel in Windows, only Windows appears in the list. If I choose Advanced Boot Options in Windows and select a USB flash drive or USB disk, it just boots back into Windows 10. I've tried every shortcut on this page (on a USB keyboard): https://support.apple.com/en-us/HT201255 Even the shortcuts to reset the PRAM do not work. Is it the UEFI? How do I reset it? I do not know why, but 4 days ago I completely wiped macOS Mojave from my mid-2013 MacBook Air because of a storage issue, and after the wipe it was back on OS X Mountain Lion. Now I want to upgrade it to macOS Mojave again, so I downloaded the Mojave installer, but in the end it does not work the way it is supposed to: after opening, it hangs and does not respond the whole time. Does anyone else in this beautiful world have to deal with such problems, or does anyone have a solution or suggestion? Then please be so kind and help.
This software prototype creates an explorative sample of core accounts in (optionally language-based) Twitter follow networks. In this journal article we explain the underlying method and draw a map of the German Twittersphere: This talk sums up the article: A short usage demo that was prepared for the ICA conference in 2020 can be found here: (PLEASE NOTE: The language specification is not working as it did for our paper due to changes in the Twitter API. Now it uses the language of the last tweet (or optionally the last 200 tweets with a threshold fraction defined by you to avoid false positives) by a user as determined by Twitter instead of the interface language. This might lead to different results.) A large-scale test of the 'bootstrapping' feature regarding the preservation of k-coreness-ranking of the sampled accounts is presented here: https://youtu.be/sV8Giaj9UwI (video for IC2S2 2020) A test of the sampling method across language communities (Italian and German) can be watched here: https://www.youtube.com/watch?v=dhXRO2d1Eno (video for AoIR 2020) Please feel free to open an issue or comment if you have any questions. Moreover, if you find any bugs, you are invited to report them as an Issue. Before contributing/raising an issue, please read the Contributor Guidelines. - Create a Twitter Developer app - Set up your virtual environment with pipenv (see here) - Have users authorise your app (the more the better - at least one) (see here) - Set up a mysql Database locally or online. - Fill out config.yml according to your requirements (see here) - Fill out the seeds_template with your starting seeds or use the given ones (see here) - Start software, be happy - (Develop the app further - run tests) We recommend installing pipenv (including the installation of pyenv) to create a virtual environment with all the required packages in the respective versions. 
After installing pipenv, navigate to the project directory and run: This creates a virtual environment and installs the packages specified in the Pipfile. to start a shell in the virtual environment. This app is based on a Twitter Developer app. To use it you first have to create a Twitter app. Once you have done that, your Consumer API Key and Secret have to be pasted into a keys.json, for which you can copy empty_keys.json (do not delete or change this file if you want to use the developer tests). You are now ready to have users authorize your app so that it will get more API calls. To do so, run This will open a link to Twitter that requires you (or someone else) to log in with their Twitter account. Once logged in, a 6-digit authorisation key will be shown on the screen. This key has to be entered into the console window where twauth.py is still running. Once the code is entered, a new token will be added to the tokens.csv file. For this software to run, the app has to be authorised by at least one Twitter user. After setting up your mysql database, copy config_template.yml to a file named config.yml and enter the database information. Do not change the dbtype argument, since at the moment only mySQL databases are supported. Note that the password field is required (which also means that your database has to be password-protected). If no password is given (even if none is needed for the database), the app will raise an Exception. You can also indicate which Twitter user account details you want to collect. Those will be stored in a database table called user_details. At the moment the software requires account id, follower count, account creation time and account tweet count to be collected, and you have to activate these by uncommenting them in the config. If you wish to collect more user details, just enter the mysql type after the colon (":") of the respective user detail in the list. The suggested type is already indicated in the comment in the respective line.
Note, however, that collecting huge amounts of data has not been tested with all the user details being collected, so we cannot guarantee the code works with all of them. Moreover, due to Twitter API changes, some of the user details may become private properties and thus no longer collectable through the API. If you have a mailgun account, you can also add your details at the bottom of the config.yml. If you do so, you will receive an email when the software encounters an error. The algorithm needs seeds (i.e. Twitter Account IDs) to draw randomly from when initialising the walkers or when it reaches an impasse. These seeds have to be specified in seeds.csv, one Twitter account ID per line. Feel free to use seeds_template.csv (and rename it to seeds.csv) to replace the existing seeds, which are 200 randomly drawn accounts from the TrISMA dataset (Bruns, Moon, Münch & Sadkowsky, 2017) that use German as interface language. Note that seeds.csv has to contain at least as many account IDs as there are walkers running in parallel. We suggest using at least 100 seeds, the more the better (we used 15,000,000). However, since a recent update, the algorithm can gather ('bootstrap') its own seeds, so there is no need to give a comprehensive seed list. This changes the quality of the sample (whether for the worse or the better is the subject of ongoing research); however, it makes it a very powerful exploratory tool. PLEASE NOTE: The language specification is not working as it did for our paper due to changes in the Twitter API. Now it uses the language of the last tweet(s) by a user as determined by Twitter instead of the interface language. This might lead to different results from our paper (even though the macrostructures of a certain network should remain very similar). 
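As a sanity check before starting a crawl, the seeds.csv format described above (one numeric Twitter account ID per line) is easy to validate with a few lines of Python. The helper names below are hypothetical and not part of RADICES:

```python
import csv
import random

def load_seeds(path):
    """Read a seeds.csv (one numeric Twitter account ID per line)
    and fail loudly on malformed entries."""
    with open(path, newline='') as f:
        seeds = [row[0].strip() for row in csv.reader(f) if row]
    bad = [s for s in seeds if not s.isdigit()]
    if bad:
        raise ValueError(f"non-numeric account IDs: {bad[:5]}")
    return seeds

def draw_seeds(seeds, n):
    """Draw n distinct seeds, mirroring the constraint that the pool
    must hold at least as many IDs as there are parallel walkers."""
    if n > len(seeds):
        raise ValueError("need at least as many seeds as walkers")
    return random.sample(seeds, n)

# demo: write a small synthetic seed file and draw from it
with open('seeds.csv', 'w') as f:
    f.write('\n'.join(str(813286 + i) for i in range(100)))

print(len(draw_seeds(load_seeds('seeds.csv'), 2)))  # 2
```

Running a check like this before launching the crawler avoids discovering a malformed seed file only after the walkers have started.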
Run (while you are in the pipenv virtual environment) python start.py -n 2 -l de it -lt 0.05 -p 1 -k "keyword1" "keyword2" "keyword3"
- -n takes the number of seeds to be drawn from the seed pool,
- -l sets the Twitter accounts' last-status languages you are interested in,
- -lt defines the fraction of the last 200 tweets that has to be detected as being in the requested languages (might slow down collection),
- -k can be used to only follow paths to seeds who used the defined keywords in their last 200 tweets (keywords are interpreted as regexes, ignoring case),
- and -p sets the number of pages to look at when identifying the next node.
For an explanation of advanced usage and more features (like 'bootstrapping', an approach reminiscent of snowballing to grow the seed pool), use python start.py --help, which will show a help dialogue with explanations and default values. Please raise an issue if those are not clear enough.
- If the program freezes after saying "Starting x Collectors", it is likely that either your keys.json or your tokens.csv contains wrong information. We are working on a solution that is more user-friendly!
- If you get an error saying "lookup_users() got an unexpected keyword argument", you likely have the wrong version of tweepy installed. Either update your tweepy package or use pipenv to create a virtual environment and install all the packages you need.
- If at some point an error is encountered: there is a -r (restart with latest seeds) option to resume collection after interrupting the crawler with control-c. This is also handy in case you need to reboot your machine. Note that you will still have to define the other parameters as you did when you started the collection the first time.
It is possible to import the data into tools like Gephi via a MySQL connector. However, Gephi apparently supports only MySQL 5 at the time of writing. 
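The flags documented above compose like ordinary CLI options. The sketch below mirrors them with argparse purely for illustration; it is not the project's actual start.py (whose parser may differ in names and defaults), just a model of how the documented options parse:

```python
import argparse

# Illustrative parser for the documented start.py options. The real
# RADICES start.py defines its own parser (see: python start.py --help).
parser = argparse.ArgumentParser(description="RADICES-style crawler options")
parser.add_argument("-n", type=int, default=2,
                    help="number of seeds drawn from the seed pool")
parser.add_argument("-l", nargs="+", default=[],
                    help="last-status languages of interest, e.g. de it")
parser.add_argument("-lt", type=float, default=0.0,
                    help="fraction of the last 200 tweets that must match")
parser.add_argument("-k", nargs="+", default=[],
                    help="keywords (regexes, case-insensitive) to follow")
parser.add_argument("-p", type=int, default=1,
                    help="pages to inspect when picking the next node")

# parse the example invocation from the README
args = parser.parse_args(["-n", "2", "-l", "de", "it", "-lt", "0.05"])
print(args.n, args.l, args.lt)  # 2 ['de', 'it'] 0.05
```

Note how the multi-valued -l option stops consuming arguments at the next flag, which is why "-l de it -lt 0.05" parses unambiguously.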
Then you can import the results, in the case of Gephi via the menu item File -> Import Database -> Edge List, using your database credentials and SELECT * FROM nodes as the "Node Query" SELECT * FROM result as the "Edge Query" if you want to analyse the walked edges only (as done in the German Twittersphere paper) SELECT * FROM dense_result as the "Edge Query" if you want to analyse all edges between collected accounts (which will be a much denser network) Other tables created by RADICES in the database that might be interesting for analysis are: - result: edge list (columns: source,target) containing the Twitter IDs of walked accounts - friends: cache of collected follow connections, up to p * 5000 connections per walked account (might contain connections to accounts which do not fulfill language or keyword criteria) - user_details: user details cache, as defined in config.yml, of all accounts in result and friends (might contain not-yet-deleted data from accounts which do not fulfill language or keyword criteria) Other tables contain only data that is necessary for internal functions. For development purposes. Note that you still need a functional (i.e. filled out) keys.json and tokens indicated in tokens.csv to work with. Moreover, for some tests to run through, some user details json files are needed. They have to be stored in tests/tweet_jsons/ and can be downloaded by running python make_test_tweet_jsons.py -s 1670174994 where -s stands for the seed to use and can be replaced by any Twitter seed of your choice. Note: the name tweet_jsons is misleading, since the json files actually contain information about specific users (friends of the given seed). This will be changed in a later version. Before testing, please re-enter the password of the sparsetwitter mySQL user into the passwords_template.py. Then, rename it to passwords.py. If you would like to make use of (and test) mailgun notifications, please enter the relevant information as well. 
Some of the tests try to connect to a local mySQL database using the user "sparsetwitter@localhost". For these tests to run properly it is required that a mySQL server actually runs on the device and that a user 'sparsetwitter'@'localhost' with relevant permissions exists. Please refer to the mySQL documentation on how to install mySQL on your system. If your mySQL server is up and running, the following command will create the user 'sparsetwitter' and give it full permissions (replace <your password> with a password of your choice): CREATE USER 'sparsetwitter'@'localhost' IDENTIFIED BY '<your password>'; GRANT ALL ON *.* TO 'sparsetwitter'@'localhost' WITH GRANT OPTION; For the functional tests, the test FirstUseTest.test_restarts_after_exception will fail if you did not provide (or did not provide valid) Mailgun credentials. Also one unit-test will fail in this case. To run the tests, just type and / or python tests/tests.py -s The -s parameter is for skipping API call-draining tests. Note that even if -s is set, the tests can take very long to run if only a few API tokens are given in the tokens.csv. The whole software relies on a sufficiently high number of tokens. We used 15. By submitting a pull request to this repository, you agree to license your contribution under the MIT license (as this project is). The "Logo" above is from https://commons.wikimedia.org/wiki/File:Radishes.svg and licensed as being in the public domain (CC0).
Defects are not the fault of programmers Sorry this one is late! I've been having a rough time lately, which I think is pretty normal right now for everyone. But I'm doing a bit better now and hope to stay that way for the next month or so. Hope. Anyway, yesterday a bunch of us were trying to explain to someone on Twitter why we don't like Robert Martin, and I brought up his claim "Defects are the fault of programmers. It is programmers who create defects – not languages". I said this was a major flaw in his philosophy. The person asked: Regarding defects, who else is to blame if not the person who introduced them? Which is way too complicated a question to answer in a Tweetstorm. It's also not something I really have capacity right now to write a fully researched essay on, so you will get my thoughts as a newsletter. Hooray! So on a surface level, that sounds obvious. Defects come from code, programmers write code, therefore defects come from programmers. As a statement of fact, this doesn't quite hold. A lot of the nastiest bugs come from components that are all correct in isolation but interact in a dangerous way. This is called a Feature Interaction Bug. You can make a case that it's both people's fault, or neither person's fault, or the manager's fault, or who knows whose fault. Another set of dangerous bugs come from ambiguous requirements. This is where there are multiple ways of interpreting the requirements that are all completely reasonable, and possibly even the client doesn't know which one they intended. I call these "coincidental" counterexamples because they don't actually refute the core idea, only the specific claim. Both feature interaction and ambiguous requirement bugs are amenable to upfront design methods like formal specifications. So you can argue that this is the programmer's fault for not writing a spec. 
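To make the feature-interaction point concrete, here is a hypothetical sketch (invented pricing rules, not from any real system) of two functions that each satisfy their own specification but violate an invariant when composed:

```python
def apply_coupon(total):
    """Feature A: $15 off any order of $20 or more.
    Correct against its own spec in isolation."""
    return total - 15 if total >= 20 else total

def apply_loyalty_discount(total):
    """Feature B: 50% off for loyalty members.
    Also correct against its own spec in isolation."""
    return total / 2

# Composed, a $22 order becomes 22 -> 7 (coupon) -> 3.50 (loyalty).
# Each function did exactly what it was specified to do, yet together
# they can undercut a hypothetical "never charge below $5" business
# rule that neither one is responsible for checking.
order = apply_loyalty_discount(apply_coupon(22))
print(order)  # 3.5
```

Neither function has a bug a reviewer could point at; the defect only exists in the combination, which is exactly why "blame the programmer who introduced it" has no obvious target here.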
It shows that the situation is more nuanced than the simple statement of fact assumes, but it doesn't get to why I'm so against it. No, the bigger issue with "programmers create defects" is an essential issue with the worldview. Its basis in facts is less important than its value as a "lens" we can use to understand defects. If we know where defects come from, we know what we need to do to manage them. To emphasize this: Our worldview on defects shapes our means of reducing defects. We measure the value of a lens by how useful it is. So what does the "defects are the fault of programmers" worldview tell us about our means? It says that the only way to reduce defects is to improve programmer discipline. Either the programmer is sufficiently disciplined and nothing can cause defects, or the programmer is insufficiently disciplined and nothing can prevent defects. Martin himself describes his worldview as: - Too many programmers take sloppy short-cuts under schedule pressure. - Too many other programmers think it’s fine, and provide cover. The obvious solution: - Raise the level of software discipline and professionalism. - Never make excuses for sloppy work. This is why Martin only recommends solutions that the programmer does: - Unit testing (test driven development specifically) - Code review - "Clean Code" - Pair programming This is why Martin calls language features like nullable types "the dark path" and formal specifications a "shiny". None of his solutions are about the structure of the project, the tools the programmer uses, the environment they're in, etc. Notice how little that gives us! Because all defects come from programmers, we can only talk about a very narrow range of things to eliminate defects. Anything else is just a sloppy shortcut, regardless of how well it actually works. Maybe the problem is everyone is overworked and we need to hire another programmer? No, that's just an excuse for sloppy work. 
What if our microservice architecture makes certain kinds of bugs a lot more likely? It's still your fault for not having discipline. In the safety science world this is derisively referred to as "human error". Human error is where people stop their investigations. Just scapegoat the human and let the broader system get off scot-free. Don't ask if the equipment was getting out of date, or if the training program is underfunded, or if the official processes were disconnected from the ground-level reality. When all defects are the fault of the human, then we don't have any tools to address defects besides blaming humans. And that's completely inadequate for our purposes. "Defects are the fault of programmers" may sound good, but it does nothing to help us fix defects.
kent.hansen at nokia.com Fri Aug 28 03:25:37 PDT 2009 ext Simon Hausmann wrote: > Hi Oliver, > On Wednesday 26 August 2009 11:04:29 pm ext Oliver Hunt wrote: >> A further >> issue is that any API that binds directly to JSC internals >> realistically needs to be developed by the engineers working on the >> core engine, as developing such an API needs enormous care to not >> cause harmful long term problems. > I understand that. We've been bitten by that, too, as we've ported our > existing API to JSC internals, to see how it can be done. The vast majority > was straightforward, as we've been careful in the original design. There were a few > mistakes, but we've found solutions for them, too, that don't affect the > internals in flexibility or performance. (*pokes some of the guys to elaborate > a bit here*) Here's a rough overview of JSC hacking we've been doing: - Fixing purely ECMA-compliance-related things. These are mostly small discrepancies we've found when running our tests; the associated JSC patches should be straightforward to either accept or reject (we'll be creating separate Bugzilla tasks for these). If they're not fixed, it's not critical for us, as I doubt much real-world JS depends on the behavior anyway (otherwise you'd think the issues would have been fixed already). - Adding ability to introspect non-enumerable properties of objects from C++. JSObject::getPropertyNames() seems to be made only to support the JS for..in statement; we added an extra parameter so you could ask for non-enumerable properties as well. We need this for debugging and autocompletion. The patch is large because there are a lot of classes that reimplement getPropertyNames(), so all the signatures must be updated... I'm interested in hearing if there are alternatives, e.g. adding a separate virtual function getNonEnumerablePropertyNames() and only implementing it in JSObject for a start (and we would then reimplement it in our own Qt "bridge" classes). 
See also https://bugs.webkit.org/show_bug.cgi?id=19119; it looks like the Inspector would also gain from this functionality.

- Being able to delete a property from C++ even if the property has DontDelete=true. We implemented this by adding a checkDelete argument to JSObject::deleteProperty(). I think it's similar in spirit to the put() methods that have a checkReadOnly argument. The patch is not that large compared to the getPropertyNames() change, because there aren't as many classes reimplementing deleteProperty().

- Storing CallFrame::callee() as a generic JSObject instead of a JSFunction. Any JSObject can be invoked as long as its getCallData() returns something, so we didn't see a good reason for only being able to store JS callees (in the QtScript API we need to offer the callee for native functions as well).

- Some more flexibility in dealing with timeouts: make TimeoutChecker subclassable so we can get a callback at timeout, and make it possible to adjust the timeout interval.

- Adding column number information to the parser + debugger callbacks. Not life-critical, but reasonably easy to add to the parser anyway.

- Hacking error.sourceURL and error.line to be called error.fileName and error.lineNumber instead. We don't know how many existing QtScript users out there rely on the fileName and lineNumber properties being present on error objects, though; maybe we can make the source-incompatible change JS-wise of using the JSC names. A safer solution would be to just copy sourceURL and line over to fileName and lineNumber in the parts of our code where we have the chance to do that (i.e. not inside JSC itself).

- Trying to make our debugger callbacks (which we relay from the JSC debugger callbacks) have precisely the same semantics as in our old engine. We have a similar debugging interface to JSC::Debugger, but obviously "similar" doesn't cut it for this low-level stuff.
Geoffrey Garen said in another mail in this thread that the debugger exposed too much internal engine knowledge, and that's also true for the API we provide for QtScript. We'll be creating Bugzilla tasks with patches for the things we hope make sense to get upstream (regardless of whether the QtScript API itself makes it upstream at some point, which I agree is not very reasonable so long as it depends on JSC internals), so we can discuss any implications in more detail.
add another check for no_arguments flag

Fixed a remaining place where it's needed to check for the flag. There are some other places which I'm not sure about, maybe you can decide:

https://github.com/mstange/pdb-addr2line/blob/main/src/type_formatter.rs#L791-L794
https://github.com/mstange/pdb-addr2line/blob/main/src/type_formatter.rs#L810-L812
https://github.com/mstange/pdb-addr2line/blob/main/src/type_formatter.rs#L1123-L1125

Good catch. I think we also need it for the MemberFunction cases in emit_id and emit_function. I would not do it in the other three places you found - those would only show up in template types where one of the types is a pointer to a method, for example. But we need tests for the three new cases, i.e. for the one in your current patch and for the two MemberFunction cases. Could you write some tests which use the existing test pdb files? To find the right Id indexes / Type indexes, you can apply the following patch and then run `cargo run --example pdb-addr2line -- -pCfi -e tests/fixtures/mozglue.pdb 0xabcd` with various addresses:

```diff
diff --git a/src/type_formatter.rs b/src/type_formatter.rs
index 3ec1bf2..51d1e13 100644
--- a/src/type_formatter.rs
+++ b/src/type_formatter.rs
@@ -287,11 +287,13 @@ impl<'cache, 'a, 's> TypeFormatterForModule<'cache, 'a, 's> {
                 }
                 self.maybe_emit_return_type(w, Some(t.return_type), t.attributes)?;
                 self.emit_name_str(w, name)?;
+                write!(w, "<Arguments for emit_function TypeData::MemberFunction 0x{:x}>", function_type_index.0)?;
                 self.emit_method_args(w, t, true)?;
             }
             TypeData::Procedure(t) => {
                 self.maybe_emit_return_type(w, t.return_type, t.attributes)?;
                 self.emit_name_str(w, name)?;
+                write!(w, "<Arguments for emit_function TypeData::Procedure 0x{:x}>", function_type_index.0)?;
                 write!(w, "(")?;
                 self.emit_type_index(w, t.argument_list)?;
                 write!(w, ")")?;
@@ -333,6 +335,7 @@ impl<'cache, 'a, 's> TypeFormatterForModule<'cache, 'a, 's> {
                 self.emit_type_index(w, m.parent)?;
                 write!(w, "::")?;
                 self.emit_name_str(w, &m.name.to_string())?;
+                write!(w, "<Arguments for emit_id IdData::MemberFunction 0x{:x}>", id_index.0)?;
                 self.emit_method_args(w, t, true)?;
             }
             IdData::Function(f) => {
@@ -349,6 +352,7 @@ impl<'cache, 'a, 's> TypeFormatterForModule<'cache, 'a, 's> {
                 self.emit_name_str(w, &f.name.to_string())?;
+                write!(w, "<Arguments for emit_id IdData::Function 0x{:x}>", id_index.0)?;
                 if !self.has_flags(TypeFormatterFlags::NO_ARGUMENTS) {
                     write!(w, "(")?;
                     self.emit_type_index(w, t.argument_list)?;
```

I added some more tests; I'm not sure if that's what you meant by "three new cases". Also, I feel very unsure about the helper function get_function_name_at_address that I wrote, it took me some time to win against the borrow checker...

Oh I see, it's just one new case. I had missed that emit_method_args already has the check, and that already covers the two MemberFunction cases. Thanks for the tests! Since these tests are just about the TypeFormatter, I think we should not create a context there and just call format_id and format_function directly. I think these would be the corresponding invocations for the functions you picked:

format_id(2, IdIndex(0x11c2))
format_type("name", 2, TypeIndex(0x13f4))
format_id(2, IdIndex(0x1169))
format_type("name", 5, TypeIndex(0x225d))

I got you, thanks for the hints. Changes are pushed.

Looks good, thanks! I've released 0.10.4 with this fix. I meant to do it earlier but I forgot.
Do you plan to obtain the Microsoft 365 Certified: Enterprise Administrator Expert certification? Passing the MS-100: Microsoft 365 Identity and Services exam is the surest way to achieve that dream. By earning this certificate, you can validate that you have the relevant skills and competence in planning, deploying, managing, and migrating Microsoft 365. It will also prove your ability to perform a number of management duties, covering compliance, identities, supporting technologies, and security for an enterprise. In fact, Microsoft MS-100 is not the only test that you must complete to get this expert-level certification: you should also clear the MS-101: Microsoft 365 Mobility and Security exam. It is important to take this fact into account. The target audience for the ExamSnap MS-100 certification test is professionals who have excellent knowledge and understanding of Microsoft 365 and work in at least one Microsoft 365 workload. Moreover, candidates should complete the prerequisite and obtain one of five Microsoft certificates: Modern Desktop Administrator Associate, Security Administrator Associate, Messaging Administrator Associate, Teams Administrator Associate, or Identity and Access Administrator Associate. The first important detail that you should know about the Microsoft MS-100 exam is that it is 150 minutes long. Within this time, you have to answer 40 to 60 questions, so good time management is essential to finishing the test. To clear this certification exam, you should achieve the pass mark, which is 700 points on a scale of 1000. The exam will assess your overall competence and expertise in several subject areas, so it is better to get familiar with them and prepare for the test in advance.
The domains you have to study and master are listed below:
- Office 365 Applications and Workloads Planning
- Access and Authentication Management
- User Roles and Identity Management
- Microsoft 365 Services Design and Implementation

The registration fee for applicants from the USA is $165; if you are from another country or region, this amount can be higher or lower. The exam is available in Japanese and English. You can schedule the test via the Microsoft site, choosing an online or in-person option depending on your preferences. If you choose the in-person option, you can sit for the exam at any testing center offered by Pearson VUE; just search for the closest center. Be sure to schedule this Microsoft test at the most convenient date and time, and do so as early as possible to avoid a rush. The ExamSnap Microsoft MS-100 exam is a very important step that can bring your career as an Administrator to a new level. If you want to pass this test, make sure that you prepare for it thoroughly. You can enroll in instructor-led courses, use exam dumps and practice tests, visit forums, and utilize other tools to ace this certification exam.
Is there a name for the UI that marks key positions in a scroll bar? In Chrome, when you do a text search, the locations of found instances of the text string are shown as yellow lines in the vertical scrollbar. If you run Visual Studio with the ReSharper plugin, the locations of errors/suggestions/warnings/etc. in the viewed source file are shown in a similar way in a vertical strip next to the scrollbar. Is there a name for this UI element?

That's quite nice. I didn't know Chrome did that. Similar to how a lot of comparison software shows differences.

Regarding the name, I don't know. I would say that the name is still "scrollbar", because the UI element is actually an ordinary scrollbar. When referring to the additional visual feedback, you may say just that: "a customized scroll bar with additional visual feedback to indicate the [changes / search results / highlights]".

FWIW, MSNBC calls them ScrollPins in their HTML: http://www.msnbc.msn.com/id/45176460/ns/technology_and_science-space/#.TrkcDegf6ok

KDiff3 (http://kdiff3.sourceforge.net/) has something similar; in their case it is not displayed inside the scroll bar but next to it (although it also acts as a scroll bar, so maybe this is not a reference for good UX). They call it the "Summary Column" (mentioned in their documentation). I don't think that's a good name, though.

I was pretty sure Safari had them well before Chrome; I'm sure that's where I first saw them.

I believe this design was invented by McCrickard and Catrambone of the Georgia Institute of Technology:

McCrickard DS & Catrambone R (1999). Beyond the scrollbar: An evolution and evaluation of alternative navigation techniques. Proceedings of the 1999 IEEE Symposium on Visual Languages, p270-277.

It seems very similar ideas were independently developed in Nordic countries:

Laakso SA, Laakso K, & Saura AJ (2000). Improved scroll bars. Proceedings of Human Factors in Computing Systems (CHI 00).

Björk S (2001).
The ScrollSearcher Technique: Using Scrollbars to Explore Search Results. Proceedings of Interact 2001, Eighth IFIP TC.13 Conference on Human-Computer Interaction.

Björk S & Redström J (2002). Window Frames as Areas for Information Visualization. Proceedings of NordiCHI, p247-250.

In all cases, the authors don't provide a general term for graphically encoding content information in the scrollbar track, but instead give different names depending on exactly what is encoded. Chrome's feature, for example, would be called a ScrollSearcher. There are also ChangeIndicator, ReadabilityViewer, Bookmark Scroll, Calendar Scroll, Mural Bar, and Pile Bar. We could call them "Georgian Bars" in honor of their birthplace, at least until someone finds an earlier example than McCrickard and Catrambone. A simply descriptive name would be "encoded content scrollbars."

Awesome sources, reminds me of the old days of writing research papers.

I suggest then: "Scroll Bar Highlight". საქართველოს ბარები ("Georgia's bars")? (-:

Scroll Bar Markers?

I don't think there's an official name for this feature. I call it "scrollbar indicator". Here are a few relevant articles that describe it: Techdreams article, Habitually Good article.

I have found "Scrollbar markers" or "Scrollbar annotations" for the Eclipse editor.

In some pattern libraries it's called "Annotated scrollbars". See the following link: http://quince.infragistics.com/Patterns/Annotated%20Scrollbar.aspx

The scrollbar versions are simply scrollbar markers or markers on the scrollbar - no specific name. The additional strip next to a scrollbar I would just call a marker margin. I would never go for the same name (scrollbar), to avoid confusion.

As far as I know there is no official term for this. Sysscores' "Scrollbar markers" seems nice, although I think I would call them scroll pins.

Eclipse calls it the "Overview Ruler", and I've called it an "Overview margin" (I don't know where I picked that up).
There is no standardized name though as this is not a standard UI element by any stretch...
JRules 7 introduces the Decision Validation Server (DVS), which replaces the Rule Scenario Manager (RSM) of previous versions. The main purpose of DVS is to provide a way to test rules and validate results against expected results. It also allows for simulations, which enable comparing the results of executing two rulesets. As a note for RSM users, there is a way to migrate an RSM archive to a scenario file that is compatible with DVS. I haven't tried this process, but it is available. DVS uses Excel files to feed input data and expected results through the rules. A user can generate an Excel template to fill in with input data and expected results, then run the tests and get a report on tests passing or failing. This approach can work well in some environments, as long as your business users don't mind entering the test data in the Excel template. One thing is unclear to me, and that is the evolution of this Excel file if your BOM evolves. For example, if you add a new field, is it possible to add this new field to the Excel file without regenerating it and without breaking things? In any case, I'm pretty sure that with some Excel gymnastics it would be possible to copy and paste contents, so re-entering everything is not required, but the documentation is not clear on what steps would be required if you do have a filled-in Excel template. Should you wish to go that route, there is also an API that can be used to populate the Excel template. There is also a tool that is supposed to help auto-populate the Excel file with data; the sample code would have to be customized to fit the needs of other projects. I tried DVS for testing on the samples that ILOG provides, and things obviously work relatively well. Generate a template based on a BOM, fill in the template with input data and expected results, run the test, and voilà: a nice report indicating which tests passed or failed.
The main problem I have with using DVS is the limitations it has on generating the expected results template structure. If your output is not a "flat" structure, you may be in for a lot of fun. If your main output entity contains other entities, the scenario might not get generated properly if the number of child entities is not bounded. In the real-life projects I played with, the output has multiple levels and some unbounded elements, and the wizard can't generate the template properly because it attempts to "flatten" it. There might be a workaround using virtual attributes (which basically means that you end up flattening the structure manually), but I was not going to start doing that for 180 attributes. For those interested, I am reproducing the limitations text here.

From the main readme.html located in the install directory:

| Limitations | Comments and workarounds |
| Cannot test the content in collections of complex type in Excel scenario file templates. | The column headings of the Excel scenario file template are defined by the attributes of the BOM class. Each column corresponds to an attribute. Each attribute for which a test is defined in the Expected Results sheet: If the attribute is a collection of simple type ( |

And from the documentation: WebSphere ILOG JRules V7.0 > Rule Studio online help > Testing, validating, and auditing rules > Tasks > Adding or removing columns from the Excel file.

Besides this limitation, DVS is very promising. Simulations allow you to leverage DVS to perform "what if" analysis of the rules; in other words, to analyze how a new ruleset will behave, usually based on some historical data. You would identify and create KPIs that perform the measurements you are looking for, and the results would be made available to you after running the simulation. I haven't tried the simulations at this point; if I get a chance I will in the future and report my findings here.
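The manual "flattening" that virtual attributes force on you can be sketched generically. The helper below is a hypothetical plain-Ruby illustration (flatten_row is my own name, not an ILOG/DVS API): it turns a nested result structure into the flat column-name/value pairs that a single spreadsheet row needs.

```ruby
# Flatten a nested result structure (hashes and arrays) into a single
# spreadsheet-style row: one flat hash mapping column names to values.
def flatten_row(value, prefix = nil)
  case value
  when Hash
    value.each_with_object({}) do |(k, v), out|
      key = prefix ? "#{prefix}.#{k}" : k.to_s
      out.merge!(flatten_row(v, key))
    end
  when Array
    # Unbounded collections are exactly what the wizard struggles with:
    # each element gets its own indexed column here.
    value.each_with_index.each_with_object({}) do |(v, i), out|
      out.merge!(flatten_row(v, "#{prefix}[#{i}]"))
    end
  else
    { prefix => value }
  end
end

row = flatten_row({ claim: { id: 7, lines: [{ amount: 10 }, { amount: 20 }] } })
# row => { "claim.id" => 7, "claim.lines[0].amount" => 10, "claim.lines[1].amount" => 20 }
```

The indexed column names also show why this breaks down for unbounded children: the set of columns grows with the data, which is why the wizard cannot generate a fixed template for such structures.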
The Decision Warehouse can be used to troubleshoot or to have a look at the execution trace of a ruleset. After enabling some properties on the ruleset and deploying it to the RES, you are able to look at the trace of the execution of the rules. The execution details include the "Decision Trace", which basically shows you which rules fired and in what order, and the input and output parameters, which let you see the details of what went in and out of the rule execution. I am really interested in this tool because I think it may allow business users to see which rules were fired, and some of the technical details, without tracing the execution of the rules step by step from Rule Studio. It can help identify the source of a problem, although the real troubleshooting might still have to be done in Rule Studio. Each decision is now tagged with an execution ID, or decision ID, which can be used to locate the decision in the Decision Warehouse. Once you are done troubleshooting, you can turn off the properties so that the execution stops gathering all of that information. Enabling the trace obviously slows performance, but it can facilitate some of the troubleshooting that sometimes needs to take place. A nice addition to the troubleshooting tools.
<?php
/**
 * Copyright (c) 2009-2011 Laposa Ltd (http://laposa.co.uk)
 * Licensed under the New BSD License. See the file LICENSE.txt for details.
 */

require_once('controllers/component/menu.php');

class Onxshop_Controller_Bo_Component_Node_Type_Menu extends Onxshop_Controller_Component_Menu {

    /**
     * main action
     */
    public function mainAction() {
        $list = $this->getList(); // default publish = 1
        $list = $this->filterAndAssignInfo($list);
        $md_tree = $this->buildTree($list, null); // null = build from the root
        $this->generateSelectMenu($md_tree);
        return true;
    }

    /**
     * generate SELECT menu
     */
    public function generateSelectMenu($md_tree) {
        $md_tree = $this->reorder($md_tree);
        if (!is_array($md_tree)) return false;

        $this->iterateThroughGroups($md_tree);
        return true;
    }

    /**
     * iterate through groups
     */
    public function iterateThroughGroups($list) {
        foreach ($list as $group) {
            // display only the requested group, but for "content" show "layout" too
            if ($this->GET['expand_all'] == 1
                || $group['name'] == $this->GET['node_group']
                || ($group['name'] == 'layout' && $this->GET['node_group'] == 'content')) {
                $this->tpl->assign('GROUP', $group);
                if (is_array($group['children']) && count($group['children']) > 0) {
                    $this->iterateThroughItems($group['children']);
                }
                $this->tpl->parse("content.group");
            }
        }
    }

    /**
     * iterate through items
     */
    public function iterateThroughItems($list) {
        foreach ($list as $item) {
            $item['selected'] = $item['selected'] ? "selected='selected'" : '';
            $this->tpl->assign('ITEM', $item);
            $this->tpl->parse("content.group.item");
        }
    }

    /**
     * get md array for node directory
     */
    public function getList($publish = 1) {
        require_once('models/common/common_file.php');
        $File = new common_file();

        // get the list of templates, joining the project and onxshop node dirs
        $list = $File->getFlatArrayFromFsJoin("templates/node/");

        // strip .html and .php extensions
        foreach ($list as $k => $item) {
            foreach (array('name', 'id', 'parent') as $field) {
                $list[$k][$field] = preg_replace('/\.(html|php)$/', '', $list[$k][$field]);
            }
        }

        return $list;
    }

    /**
     * reorder file list into a fixed display order
     */
    public function reorder($md_tree) {
        // make sure the array is sorted
        array_multisort($md_tree);

        $temp = array();
        $temp[0] = $this->findInMdTree($md_tree, 'content');
        $temp[1] = $this->findInMdTree($md_tree, 'layout');
        $temp[2] = $this->findInMdTree($md_tree, 'page');
        $temp[3] = $this->findInMdTree($md_tree, 'container');
        $temp[4] = $this->findInMdTree($md_tree, 'site');

        return $temp;
    }

    /**
     * findInMdTree
     */
    public function findInMdTree($md_tree, $query) {
        foreach ($md_tree as $item) {
            if ($item['id'] == $query) return $item;
        }
        return null;
    }

    /**
     * filter to show only allowed items
     */
    public function filterAndAssignInfo($list) {
        $templates_info = $this->retrieveTemplateInfo();

        // set selected item
        if (!$this->GET['open']) $selected = $templates_info[$this->GET['node_group']]['default_template'];
        else $selected = $this->GET['open'];

        // create filtered array
        $filtered_list = array();
        foreach ($list as $item) {
            if (array_key_exists($item['parent'], $templates_info)) {
                // process items assigned in template_info, or the template the node
                // already has selected (i.e. during a transition)
                if (array_key_exists($item['name'], $templates_info[$item['parent']]) || $item['name'] == $selected) {
                    // use the template_info title if available
                    $title = trim($templates_info[$item['parent']][$item['name']]['title']);
                    $item['title'] = ($title !== '') ? $title : $item['name'];
                    $filtered_list[] = $item;
                }
            } else {
                $filtered_list[] = $item;
            }
        }

        // mark the selected item
        foreach ($filtered_list as $k => $item) {
            $filtered_list[$k]['selected'] =
                ($item['name'] == $selected && $this->GET['node_group'] == $item['parent']);
        }

        return $filtered_list;
    }

    /**
     * retrieve template_info
     */
    public function retrieveTemplateInfo() {
        // the included conf files are expected to define $templates_info;
        // note that require_once will not re-populate it on repeated calls
        require_once(ONXSHOP_DIR . "conf/node_type.php");

        // local overwrites
        if (file_exists(ONXSHOP_PROJECT_DIR . "conf/node_type.php")) {
            require_once(ONXSHOP_PROJECT_DIR . "conf/node_type.php");
        }

        return $templates_info;
    }
}
<?php

class AdminModel {

    private $obj_mysql;

    public function __construct($obj_mysql) {
        $this->obj_mysql = $obj_mysql;
    }

    public function getUsersWithOutRol() {
        $sql = "SELECT * FROM usuario u
                INNER JOIN empleado e ON u.id = e.usuario_id
                INNER JOIN rol ON e.id_rol = rol.id
                WHERE e.id_rol = 5";
        return $this->obj_mysql->execute($sql);
    }

    public function getUsersWithRol() {
        $sql = "SELECT * FROM usuario u
                INNER JOIN empleado e ON u.id = e.usuario_id
                INNER JOIN rol ON e.id_rol = rol.id
                WHERE NOT e.id_rol = 5";
        return $this->obj_mysql->execute($sql);
    }

    public function getUserForId($id_usuario) {
        // WARNING: interpolating values directly into SQL, as this class does
        // throughout, is vulnerable to SQL injection; prepared statements
        // should be used instead.
        $sql = "SELECT * FROM usuario
                INNER JOIN empleado ON usuario.id = empleado.usuario_id
                INNER JOIN rol ON empleado.id_rol = rol.id
                WHERE usuario.id = " . $id_usuario;
        return $this->obj_mysql->query($sql);
    }

    public function userEdit($data) {
        $usuario_original_sin_editar = $this->getUserForId($data["id_usuario"]);
        $usuario_editado = $data;

        if ($usuario_original_sin_editar['id_rol'] !== $usuario_editado['id_rol']) {
            // the role changed: move the user to the table for the new role
            $usuario_editado = $this->replaceDataUsers($usuario_original_sin_editar, $usuario_editado);
            $tabla_a_insertar_new_rol = $this->getRolName($usuario_editado['id_rol']);
            $es_borrado = $this->deleteUser($usuario_original_sin_editar['usuario_id']);
            $es_insertado = $this->insertUser($usuario_editado, $tabla_a_insertar_new_rol);
            return $es_borrado && $es_insertado;
        }

        // role unchanged: update the employee and user rows in place
        foreach ($data as $key => $value) {
            $$key = $value;
        }
        $sql_empleado = "UPDATE empleado SET legajo = $legajo, dni = $dni, fecha_nacimiento = '$nacimiento', id_rol = '$rol' WHERE usuario_id = $id_usuario";
        $sql_usuario = "UPDATE usuario SET email = '$email' WHERE id = $id_usuario";
        return $this->obj_mysql->execute($sql_empleado) && $this->obj_mysql->execute($sql_usuario);
    }

    private function replaceDataUsers($original_user, $edit_user) {
        // overwrite original values with any edited values for matching keys
        foreach ($original_user as $key => $value) {
            foreach ($edit_user as $key_edit => $value_edit) {
                if ($key == $key_edit && $value !== $value_edit) {
                    $original_user[$key] = $edit_user[$key_edit];
                }
            }
        }
        return $original_user;
    }

    public function deleteUser($id_user) {
        $sql = "DELETE FROM usuario WHERE id = " . $id_user;
        return $this->obj_mysql->execute($sql);
    }

    private function insertUser($user, $newRol) {
        foreach ($user as $key => $value) {
            $$key = $value;
        }
        $codigo = ($codigo == null) ? "NULL" : $codigo;
        $dni = ($dni == null) ? "NULL" : $dni;
        $fecha_nacimiento = ($fecha_nacimiento == null) ? "NULL" : $fecha_nacimiento;

        $sql_insert_usuario = "INSERT INTO usuario(id,nombre,apellido,usuario,contrasenia,email,estado,codigo) VALUES($usuario_id,'$nombre','$apellido','$usuario','$contrasenia','$email',$estado,$codigo)";
        $sql_insert_empleado = "INSERT INTO empleado(legajo,dni,fecha_nacimiento,usuario_id,id_rol) VALUES($legajo,$dni,'$fecha_nacimiento',$usuario_id,$id_rol)";
        $sql_insert_new_rol = "INSERT INTO {$newRol}(legajo,id_rol) VALUES($legajo,$id_rol)";

        return $this->obj_mysql->execute($sql_insert_usuario)
            && $this->obj_mysql->execute($sql_insert_empleado)
            && $this->obj_mysql->execute($sql_insert_new_rol);
    }

    public function addRol($array) {
        foreach ($array as $key => $value) {
            $$key = $value;
        }
        $sql_update_rol = "UPDATE empleado SET id_rol = $idRol WHERE usuario_id = $id_user";
        $nombre_de_tabla_a_insertar = $this->getRolName($array["idRol"]);
        $sql_insert_new_rol = "INSERT INTO {$nombre_de_tabla_a_insertar}(legajo,id_rol) VALUES($legajo,$idRol)";
        return $this->obj_mysql->execute($sql_update_rol) && $this->obj_mysql->execute($sql_insert_new_rol);
    }

    private function getRolName($num_rol) {
        if ($num_rol == 1) return "administrador";
        if ($num_rol == 2) return "supervisor";
        if ($num_rol == 3) return "mecanico";
        if ($num_rol == 4) return "chofer";
        return null;
    }
}
# == Schema Information
#
# Table name: weathers
#
#  id         :integer  not null, primary key
#  api_key    :string
#  latitude   :float
#  longitude  :float
#  created_at :datetime not null
#  updated_at :datetime not null
#  address    :string
#  units      :text
#  city       :string
#  state      :string
#

class Weather < ActiveRecord::Base
  require 'geocoder'
  require 'forecast_io'

  serialize :units, Hash

  attr_accessor :address

  before_validation :get_precise_location, if: :location_empty?
  before_validation :config

  validates_presence_of :address, if: :location_empty?
  validates_presence_of :latitude, :longitude, :api_key, :units

  DEFAULT_UNITS = { 'US Customary Units': 'us' }
  SUPPORTED_UNITS = { 'SI': 'ca', 'US Customary Units': 'us' }

  def config
    # accept only unit pairs that appear in SUPPORTED_UNITS (comparing the
    # whole Hash against SUPPORTED_UNITS.keys could never match)
    if units.nil? || units.empty? || !units.all? { |k, v| SUPPORTED_UNITS[k] == v }
      logger.warn "The supplied units: '#{units}' were either nil or not found in this list: #{SUPPORTED_UNITS}. Setting to the default of: #{DEFAULT_UNITS}."
      self.units = DEFAULT_UNITS
    end
  end

  def location_empty?
    latitude.blank? || longitude.blank?
  end

  def get_weather
    # TODO: make the expiry configurable, and warn that going below 2 minutes
    # can exceed the default 1000 calls/day limit that forecast.io has
    Rails.cache.fetch("weather_#{self.id}/forecast", expires_in: 2.minutes, race_condition_ttl: 1.minute) do
      ForecastIO.api_key = api_key
      ForecastIO.default_params = { units: units.values.first }
      ForecastIO.forecast(latitude, longitude)
    end
  end

  def get_city_and_state
    # resolve first (resolve_city_and_state takes no arguments),
    # then build the display string
    resolve_city_and_state
    "#{city}, #{state}"
  end

  def as_json(options)
    json = super(only: [:id])
    json[:self_uri] = Rails.application.routes.url_helpers.weather_path(self.id)
    json
  end

  private ####################################################

  def get_precise_location
    logger.debug "Getting precise location for address: #{address}"
    geocoded = Geocoder.search(self.address).first
    if geocoded.nil?
      logger.error "Getting geolocation failed on address: #{self.address}"
      self.errors.add(:base, 'Fetching the precise location failed. Please check that the address is valid.')
      return
    end
    update!(latitude: geocoded.latitude, longitude: geocoded.longitude)
  end

  def resolve_city_and_state
    return if city && state

    geo_search = Geocoder.search("#{self.latitude}, #{self.longitude}").first
    update!(city: geo_search.city, state: geo_search.state)
  end
end
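The get_weather method above leans on Rails.cache.fetch to avoid hammering the forecast.io API on every request. The same fetch-with-expiry pattern can be sketched in plain Ruby; SimpleCache below is a hypothetical stand-in for illustration, not the actual Rails cache API:

```ruby
# Minimal sketch of the fetch-with-expiry pattern behind Rails.cache.fetch:
# return the cached value while it is fresh, otherwise run the block,
# store its result with a time-to-live, and return it.
class SimpleCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  def fetch(key, expires_in:)
    entry = @store[key]
    # serve from the cache while the entry has not expired
    return entry.value if entry && Time.now < entry.expires_at

    # cache miss or stale entry: recompute and store with a fresh TTL
    value = yield
    @store[key] = Entry.new(value, Time.now + expires_in)
    value
  end
end

cache = SimpleCache.new
forecast = cache.fetch("weather_1/forecast", expires_in: 120) { :fresh_api_call }
# Later calls within 120 seconds reuse the stored value without
# running the block (i.e. without another API request).
```

The race_condition_ttl option in the real Rails API adds one refinement omitted here: when an entry expires, one process recomputes it while other processes briefly keep serving the stale value instead of stampeding the API.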
This article was originally published at https://www.rstudio.com/blog/

Photo by JD Long

We're excited to announce our 2022 Summer Internship Program. Through the program, students can apply for full-time summer internships within one of the departments of RStudio. The program allows students to do impactful work that helps both RStudio users and the broader community, and helps ensure that the community of developers and educators is just as diverse as its community of users. Internship projects will give you hands-on experience, exposure to a collaborative culture, and opportunities to sharpen problem-solving and critical-thinking skills. You will acquire professional experience and work with knowledgeable data scientists, software developers, and educators to create and share new tools and ideas. These are paid internships for 12 weeks, and there are two cohorts, starting either May 9th or June 13th, depending on your school schedule. To qualify, you must currently be a student (broadly construed - if you think you're a student, you probably are), have some experience writing code, and have used Git and GitHub. To demonstrate these skills, your application needs to include a link to a package, Shiny app, data analysis repository, or other software or educational materials. It's OK if you create something specifically for this application: we just need to know that you're already familiar with the mechanics of collaborative software development. The postings will remain open until filled, but for a full review, it's best to get your application in by March 7, 2022. RStudio is a geographically distributed team, which means you can be based anywhere in the United States. We hope to expand the program to other countries in the future, but for this year you must legally be able to work in the United States. You will be working 100% remotely, and you will meet with your mentor regularly online.
To be eligible, you must be:
- A current student over the age of 18
- Legally able to work in the United States
- Experienced using Git and GitHub or Bitbucket
- Interested in developing tools or educational materials for data science

How to Apply

You can apply for the position that you're interested in through our career portal. You are welcome to apply to more than one internship opportunity. See the Internships on our Careers page. To apply you need:
- A resume
- Links to any software projects you've worked on (GitHub, BitBucket, Google Drive, web pages, or other)

Questions about the program? Post questions and look for answers at community.rstudio.com/tags/rstudio-internship.

RStudio is committed to being a diverse and inclusive workplace. We encourage applicants of different backgrounds, cultures, genders, experiences, abilities and perspectives to apply. All qualified applicants will receive equal consideration without regard to race, color, national origin, religion, sexual orientation, gender, gender identity, age, or physical disability. However, applicants must legally be able to work in the United States.
Dos and don'ts for mentors

What you should do

Help her find a suitable project. Taking your mentee's ideas as input, help her decide on a project she likes and can realistically complete in the 12 weeks of the program.

Suggest appropriate tutorials. We have made a selection of tutorials available on our website, but you should select the ones your mentee should be reading for each task. You can select other tutorials outside of our list that you find better for a given task. If you have found a great tutorial that you think should be on the list, please email it to us!

Don't forget the timezone. When you are arranging hangouts / Skype calls, don't forget to take into account the timezone of your mentee.

Give her a timeline and stick to it. You should be able to see the bigger picture for your mentee, and even though you might be off by 1-2 weeks on your initial estimate, at least you will have a roadmap and you will be able to say how far the project can go at any point in time. E.g. 1 week for learning how to use GitHub and doing basic tutorials; 1 week for thinking through the basis of the project (e.g. what classes, modules, etc.).

Teach her what the tutorials don't. Tell her about the importance of details such as modularity, refactoring, etc.

Teach her how to debug. OK, the program is not working: what next? Teach her about compiler errors, runtime errors, and how to handle each of them. If possible, work with her, and have her do something wrong on purpose so she can see the output.

Ask questions to help her reach a conclusion. Make her explain her decisions (what does she want to accomplish? how is she going to do it?), and if the solution cannot accomplish her goal, then help her understand this by asking questions and letting her come to those conclusions herself.

Support her decisions. Support her decisions even if you don't think it's the best or most efficient way. Don't impose.
Good to have

Spend more time with her at the beginning of the program. This is even more necessary if she has not had any previous programming experience. At the beginning there should be more hangouts / Skype calls, whereas by the end you could just communicate by mail.

Answer e-mails promptly. It is really important for your mentee that you answer her e-mails within 24 hours, so her learning process is not interrupted for too long.

What you don't have to do

You don't have to go through tutorials with her. She can do this in her own time, at her own pace, and let you know by mail if she has any difficulties.

You don't have to stick to the original plan. Thoughts change, and the mentee's view of the project will also change as she learns more and more. We don't expect her to stick to the original plan presented. However, every change should be discussed and approved by the mentor, whereas a change of the theme of the project has to be discussed and approved by the organizers.

What you shouldn't do

Don't work on her project for her. She should learn how to program herself and get practice writing code. Tell her what she should do if she needs guidance, but let her choose how she implements it.

Don't spam her with a lot of materials. It is better to give her only one resource, two at most, for a given task, and later ask her if she has read and understood what was written there.

Don't tell her everything at once. It is better to explain concepts as she goes; too much information might frighten her, and she will forget it more easily.
|Fractal Image Compression| |Written by Mike James| |Friday, 08 April 2022| Page 4 of 5 A secondary result that helps is the continuity condition. This says that as you alter the coefficients of the IFS the attractor changes smoothly and without any sudden jumps. This means that one method of performing the compression is to write a program which allows a user to manually fiddle with a set of controls in an effort to transform the original image into a passable likeness of itself under a set of contractive affine transformations. The resulting set of coefficients could then be used as the compressed version of the image. This is a workable method of compressing black and white images. As you add more transformations to the IFS then the accuracy of the representation increases - so you can trade storage for accuracy as in a traditional compression scheme. A practical scheme The manual matching method of image compression does work but it is hardly a practical proposition if you want to compress hundreds of images. Fortunately there is a way to make the process automatic. The whole theory that we have been explaining still works if you consider contractive affine transformations that are restricted to work only on parts of the whole image. The idea is that instead of trying to find an affine transformation that maps the whole image onto part of itself you can try to find an affine transformation that maps one part of the image onto another part of the image. As most images have a high degree of affine redundancy - i.e. one part of the picture is near as makes no matter an affine copy of another - this should work well and perhaps even better than the manual method described earlier. In an ideal case the division of the image into parts would be done to optimise the matching of one part with another but such an intelligent approach would need a human to drive it. 
A fully automatic approach is only possible using a fixed division of the image into so called "domain" and "range" blocks. The range blocks contain parts of the image that will be mapped to the domain blocks in an attempt to match the original image. It is obvious that the domain blocks have to completely divide up the original image without overlap because we are interested in reconstructing the entire image. Also the range blocks have to overlap because they have to be bigger than the domain blocks - how else could the transformation from range to domain be contractive? You can think of this as trying to find transformations that map big bits of the image into smaller bits of the image - but what transformations? The entire range of contractive affine transformations is a bit too big to handle so it is customary to restrict the attention to a smaller set. For example, you might use the set of transformations which form the group of symmetry operations of the square: These are not contractive affine maps but by adding the condition that the range block has to be shrunk down onto the domain block they can be made to be contractive. Now we have an automatic compression algorithm: You can see that the resulting fractal compressed image is just a list of range block positions and transformations. To reconstruct the image you simply use this information to iterate the entire set of transformations on a starting image. The collage theorem guarantees that the attractor of the set of transformations is close to the original image. How good the compression is depends on how well the transformations manage to map the sections of the image in the range blocks to each domain block. As long as the original image is rich in affine redundancy the chances are that it will work well. If not you either have to increase the possible range of transformations or divide up the image into smaller blocks. 
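The automatic scheme described above can be sketched in a few lines of Python. This is a hedged toy version, not the algorithm from the article: real codecs use larger blocks (commonly 8x8 domain and 16x16 range), search the eight symmetry operations of the square, and fit a brightness/contrast adjustment per block; here the only "transformation" is averaging a 4x4 range block down to 2x2, and all function names are mine.

```python
# Toy fractal block matching: for each non-overlapping 2x2 "domain"
# block, find the 4x4 "range" block whose average-downsampled 2x2 copy
# matches it best. The image is a list of lists of grey values.

def blocks(img, size, step):
    """Yield (row, col) top-left corners of size x size blocks."""
    h, w = len(img), len(img[0])
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            yield r, c

def downsample(img, r, c):
    """Shrink the 4x4 block at (r, c) to 2x2 by averaging 2x2 cells."""
    return [[sum(img[r + 2*i + di][c + 2*j + dj]
                 for di in range(2) for dj in range(2)) / 4.0
             for j in range(2)] for i in range(2)]

def sse(a, b):
    """Sum of squared differences between two equal-sized blocks."""
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(len(a)) for j in range(len(a[0])))

def compress(img):
    """Map every 2x2 domain block to its best-matching 4x4 range block."""
    mapping = []
    for dr, dc in blocks(img, 2, 2):       # domain blocks tile the image
        dom = [row[dc:dc + 2] for row in img[dr:dr + 2]]
        best = min(blocks(img, 4, 2),      # range blocks may overlap
                   key=lambda rc: sse(dom, downsample(img, *rc)))
        mapping.append(((dr, dc), best))
    return mapping
```

The list returned by `compress` — one range-block address per domain block — is the "compressed" image; decompression would repeatedly apply these shrink-and-copy maps to any starting image, which is exactly the iteration the collage theorem relies on.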
That's about it: the method needs to be extended a little to cover grey level and colour images (see grey level compression), but the same basic method still works. The final touch is to use one of the standard data compression methods on the coefficients of the IFS, but this really is standard stuff.

Fractal or JPEG compression

If you actually try fractal compression out on real images, what you find is that compression ratios of 10:1 or better do not produce significant distortions. Also, while the compression time is rather on the long side, the decompression time is quite reasonable; after all, it only involves iterating with linear transformations.

Fractal compression's main competitor is the familiar JPEG standard. This is a compression procedure that makes use of a modified form of Fourier analysis. It also divides up the image into small blocks, 8x8 pixels in this case. However, rather than trying to represent each block as an affine transformation of another (larger) block, it uses a set of 2D sine waves. JPEG does produce good compression results, but it is claimed to be inferior to fractal compression methods. One of the reasons is that fractal methods are not scale dependent. You can decompress a fractal image to any resolution you like by choosing the size of the image that you iterate. Of course, beyond a certain resolution the extra detail you will see will be artificial, but the argument is that because of its fractal nature it will not be too "obvious" and the image will still look natural.

A second advantage is that fractal compression doesn't share a defect that is inherent in JPEG and all Fourier methods. The problem is that where there is a discontinuity in the image, i.e. a step from black to white, the sine functions which are used to approximate it simply cannot do a good job. Sine functions are all smooth and curvy, a discontinuity isn't, and so trying to fit continuous functions to a discontinuity is like backing a loser!
As long as you don't push JPEG to too high a compression ratio you will not notice the problem, but if you do, what you will see is a blurring of the edge and a pattern that looks like an echo of the change. This is the classical Gibbs overshoot or "ringing" effect, and there isn't much you can do about it. Of course, fractal compression has no such problem representing a discontinuity.

Even so, today JPEG compression is almost universally used, and many of the early claims for the powers of fractal compression made in the 1990s never really came to be. Fractal compression takes longer to implement, and in most cases the gain in compression ratio over JPEG isn't significant. Also, the claimed "better quality at higher resolution" isn't really an issue either. Today fractal compression is still being worked on, but at the moment it's a technology in search of a problem.

The book Fractal Image Compression: Theory and Application (see side panel) includes code for encoding and decoding images using fractals.
Download Text-to-Speech Master 2.3.6 [Full] Crack

These methods won’t even show up in Code Insight if you prefix them with an underscore (although you can still call them if you insist). CNET Editors’ note: the Text-to-Speech Master keygen button opens the iTunes Store, where you may continue the download process. You do that by deriving a class from TJavaLocal and implementing the interface. In this case, we need to list both the NSSpeechSynthesizerDelegate and NSSpeechSynthesizerDelegateEx interfaces.

Text To Speech Master Serial

I tried 5 at random today, and 3 or 4 don’t even work out of the box. SayVoice is a pure, natural speech-based text-to-speech software, which can read aloud any input language, including English, Italian, Japanese, Chinese, French, German, Russian and so on. Moreover, if EXIF data is not available you can organize them by creation date or copy them to a different location. Or you can choose to support more features which either do nothing on certain platforms or raise some kind of “not supported” exception. Verbose can be used to read aloud any text, then save it as mp3 or wav files for future listening. Pure, clear and accurate pronunciation, adjustable reading speed and repeatable recitation are all its…

Run ./gradlew runInstallerGui to start an installer GUI to download and install more voices. I wouldn’t rate it so high if I hadn’t come across so much crappy software while just trying to convert a report so that I can listen to it in the gym. TTSUU (Text to Speech Universal Utility) is a text-to-speech software that converts any text into natural-sounding speech. A running MaryTTS server needs to be restarted before the new voices can be used. It’s really useful for file management, even allowing you to split and convert documents.
Text-to-Speech Master 2.3.5 + Crack Keygen/Serial

Text-to-Speech Master is a very powerful and interesting program that lets you listen to documents, e-mails or web pages instead of reading on screen, or even convert them to audio files! Run ./gradlew distZip or ./gradlew distTar to build a distribution package under build/distributions. It lets you listen to text instead of reading on screen. Lower values slow down the speech and higher values speed it up. However, on iOS 8 and earlier the default of 0.5 is way too fast, and we have to lower the value to about 0.1 to make it comparable to the default on iOS 9 and later. The code comes under the Lesser General Public License (LGPL) version 3; see LICENSE.md for details.

- Get Free MOOS Project Viewer v2.6.2 Final version & Keygen
- Download Steelray Project Analizer v1.0 Portable + Regcode
- Free download Speak v1.8.80 Official, Keyfile
- Free DocuTrack v3.7 Latest + Serial Number
- Free download textHelp ScreenReader v2.2 Official & Keyfile
- Free Round Robin Scheduler v18.104.22.1680 Portable + Crack
Cannot link to a non-running container (GitLab CI environment)

Hi, I'd like to use this Docker image inside a GitLab CI environment and run it as a service. (One of the Python packages in my application needs the g++ compiler.) Upon startup, I get the following error:

Running with gitlab-runner 10.0.0-rc.1 (6331f360) on docker-auto-scale (e11ae361)
Using Docker executor with image continuumio/anaconda3:4.4.0 ...
Starting service frolvlad/alpine-gcc:latest ...
Pulling docker image frolvlad/alpine-gcc:latest ...
Using docker image frolvlad/alpine-gcc:latest ID=sha256:6177fe49beef0cebf06dfa7d42b7c1438fd2fe3da54b2787de130e9c882c3444 for frolvlad/alpine-gcc service...
Waiting for services to be up and running...
*** WARNING: Service runner-e11ae361-project-3680280-concurrent-0-frolvlad__alpine-gcc-0 probably didn't start properly.
Error response from daemon: Cannot link to a non running container: /runner-e11ae361-project-3680280-concurrent-0-frolvlad__alpine-gcc-0 AS /runner-e11ae361-project-3680280-concurrent-0-frolvlad__alpine-gcc-0-wait-for-service/runner-e11ae361-project-3680280-concurrent-0-frolvlad__alpine-gcc-0
*********

The Docker image is started by the GitLab runner; I don't have much control over that.

services:
  - frolvlad/alpine-gcc:latest
stages:
  - test
  - build
test:
  image: continuumio/anaconda3:4.4.0
  stage: test
  before_script:
    - python -V
    - pip -V
    - ...

Any idea how I could fix this issue? Thanks a lot.

I am not familiar with the service concept in GitLab, so I'm not sure what you want to do with it, but a compiler is not a service; it doesn't sit waiting for tasks. Thus, the container just exits right away if you don't specify any command to run in it. I'm closing this issue since it has nothing to do with the image itself. Feel free to provide more details if you have any.

@frol Hmm, OK, I might be taking this the wrong way. I thought a service would consist in just exposing a command to the build environment.
In my case the Theano library needs the g++ command during runtime. But maybe not. Thanks for the reply.

The service image can run any application, but the most common use case is to run a database container, e.g. MySQL. It's easier and faster to use an existing image and run it as an additional container than to install MySQL every time the project is built. MySQL is a service application: it runs and waits for requests over the network. GCC doesn't operate like that.

P.S. This image doesn't have a C++ compiler (it only has the C compiler; C++ is separated into the frolvlad/alpine-gxx image).
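Since a compiler is not a long-running process, the usual way out of this is to drop the `services:` entry and install g++ inside the job container itself. A hedged sketch of what the `.gitlab-ci.yml` might look like, assuming the Debian-based continuumio/anaconda3 image from the question (the apt packages are my guess at what Theano needs):

```yaml
test:
  image: continuumio/anaconda3:4.4.0
  stage: test
  before_script:
    # Install the C++ compiler in the job container instead of
    # declaring a short-lived compiler image as a service.
    - apt-get update && apt-get install -y g++
    - python -V
    - pip -V
```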
D-W translation pages (was: For the translators)

Thanks for your reply, Felipe :) I'm really sorry: on January 15th I sent a message with some tips for the Debian Women wiki, but in the hurry I totally forgot about it.

You hurried up so much, you were late! :D

Since you started the HowTo page, I believe it is a good idea to discuss the best place to put these hints (and of course, to discuss them). I wrote this as a "Tips" page and not as a HowTo; I will be happy to help you and add this content to the D-W wiki. :)

Thank you, I really appreciate your contributions. :)

I based this on the "Suggestions" page in the English area of the D-W wiki. I didn't complete all the points, but with the idea of the HowTo I believe we can. Here are the points; some of them should be added to the HowTo, and some of them, as we discussed in the channel, should be added to WikiTips (or something like that).

OK, I've had a go at adding some parts to the HowTo: please check them and edit them if they're not quite right.

1) Try to contact the person listed as primary contact for

Added. However, I don't know where I should link this. Do we have a list of contact people? If not, where should it link from, and who are they? I'll create the page, if nobody else stampedes over my quivering remains to do it. ;)

2) The wiki is password protected to avoid spammers; please ask for the password on our mailing list or contact people in #debian-women at irc.debian.org.

3) When you start a new translation, try to preserve the wiki name of the page. This has two main advantages: first, it allows other translators to make relations between one language and another, since some modifications are "language free". Second, it allows some scripts to grab statistics from source pages.

Added. However, could we have an example of a language-free modification? It would help.
4) I was thinking about a kind of hidden serial number in each page, which would allow us to know if there has been a modification in the main language and to track updated pages.

Not added yet, because this is more discussion than HowTo: I agree that an indexing system would help a lot...

5) If you preserve the wiki names, you also preserve the wiki links; to get them in your language try to use the

In English we have:

And then in Portuguese:

6) Try to add some information to your profile and also to the "translation in progress" section.

Added, but I seem to have the wrong link for Translation in Progress; I just put TranslationInProgress, but it's a new link. An index for the wiki, like a flowchart, would be a huge help...

7) I believe it is a good idea to keep the Debian translation (l10n) teams up to date with Debian Women translation efforts; sending a message to the list from time to time (and even joining the team) is good policy. :)

8) English is the "default" language; try to keep it up to date. If you are planning to create a new document, try to do it first in English and then translate it.

I added this, but with less emphasis. I hope it's OK. I don't believe we should be telling people they have to communicate in English, but a friendly suggestion is a powerful thing. ;)

9) Try to keep the menu (SideBar) up to date. Also, the first page needs special attention.

10) The Conferences section should be kept in English; we should create a page with "keyword" translations. I'm suggesting this because the conference pages contain mainly a list of names and some simple information (easy to understand?) about time and place. Maybe a "small" PHP function to do the trick.

Discussion: I agree it would be handy, so how are we going to do it?

11) The blog area should also be only in English, with all languages pointing to it.

I don't agree that blogs should have to be in English, or only in English, though. What do other D-Ws think?

12) Hope it helps, and enjoy it! :-)

Thank you.
I think I've sprained my wiki-finger. :) from Clytie (vi-VN, Vietnamese free-software translation team / nhóm Việt hóa phần mềm tự do) Clytie Siddall--Renmark, in the Riverland of South Australia Ở thành phố Renmark, tại miền sông của Nam Úc
New blocks delay in my full node

I am subscribed to new blocks on my node:

newHeads: fires a notification each time a new header is appended to the chain, including chain reorganizations.

I have a node that exceeds the recommended hardware by a factor of two. RAM is always at 50%, the CPU at 25%, and the network at 10%. In the connection parameters, I have indicated that it connects to 1000 nodes (I have been testing values between 30 and 1000).

I am measuring the latency from when a new block is created until that block reaches me. The problem is that sometimes there is a very high latency; I have seen peaks of up to 20 seconds. Is that normal?

New block: 11706845 546 ms blocksTimeDelta: 3000
New block: 11706846 2454 ms blocksTimeDelta: 3000
New block: 11706847 529 ms blocksTimeDelta: 3000
New block: 11706848 2510 ms blocksTimeDelta: 3000
New block: 11706849 1760 ms blocksTimeDelta: 3000
New block: 11706850 644 ms blocksTimeDelta: 3000
New block: 11706851 1796 ms blocksTimeDelta: 3000
New block: 11706852 1268 ms blocksTimeDelta: 3000
New block: 11706853 529 ms blocksTimeDelta: 3000
New block: 11706854 1502 ms blocksTimeDelta: 3000
//-- Latency start
New block: 11706855 1273 ms blocksTimeDelta: 3000
New block: 11706856 17785 ms blocksTimeDelta: 3000
New block: 11706857 15094 ms blocksTimeDelta: 3000
New block: 11706858 12786 ms blocksTimeDelta: 3000
New block: 11706859 10041 ms blocksTimeDelta: 3000
New block: 11706860 7287 ms blocksTimeDelta: 3000
New block: 11706861 4596 ms blocksTimeDelta: 3000
New block: 11706862 1961 ms blocksTimeDelta: 3000
New block: 11706863 1322 ms blocksTimeDelta: 3000
New block: 11706864 1231 ms blocksTimeDelta: 3000
//--
New block: 11706865 817 ms blocksTimeDelta: 3000
New block: 11706866 410 ms blocksTimeDelta: 3000
New block: 11706867 2362 ms blocksTimeDelta: 3000
New block: 11706868 413 ms blocksTimeDelta: 3000
New block: 11706869 429 ms blocksTimeDelta: 3000
New block: 11706870 1646 ms blocksTimeDelta: 3000
New block: 11706871 923 ms blocksTimeDelta: 3000
New block: 11706872 574 ms blocksTimeDelta: 3000
New block: 11706873 1561 ms blocksTimeDelta: 3000
New block: 11706874 1377 ms blocksTimeDelta: 3000
New block: 11706875 607 ms blocksTimeDelta: 3000
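The per-block figures in a log like the one above can be computed as "local arrival time minus the block's own timestamp". A minimal sketch of that bookkeeping, with synthetic data and helper names of my own (no node connection is made here):

```python
# Per-block notification latency: the gap between when a newHeads
# notification arrived locally and the block's own timestamp. With
# ~3 s block times, occasional multi-second spikes usually point to
# peer propagation delays or the node briefly falling behind.

def latencies(events):
    """events: list of (block_number, block_timestamp, arrival_time), seconds."""
    return [(num, round((arrival - ts) * 1000)) for num, ts, arrival in events]

def spikes(lat_ms, threshold_ms=5000):
    """Block numbers whose notification lagged more than threshold_ms."""
    return [num for num, ms in lat_ms if ms > threshold_ms]
```

Feeding it the two extremes from the log's synthetic equivalents makes the spike obvious:

```python
events = [(11706855, 100.0, 101.273), (11706856, 103.0, 120.785)]
lat = latencies(events)   # [(11706855, 1273), (11706856, 17785)]
```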
The Geolocation API provides a common interface for locating the device, independently of the underlying technology (GPS, Wi-Fi network identification, triangulation in cellular networks, etc.).

The DeviceOrientation Event Specification defines several DOM events that provide information about the physical orientation and motion of a hosting device. Most browsers support this specification, although various interoperability issues have arisen. The work on the specification itself was discontinued when the Geolocation Working Group was closed. It may fall in scope of the Device and Sensors Working Group when the group recharters, with a view to reducing inconsistencies across implementations and finalizing the specification accordingly, while the group develops the more powerful Orientation Sensor specification in parallel.

Technologies in progress

The Generic Sensor API defines a framework for exposing sensor data to the Web platform in a consistent way. In particular, the specification defines a blueprint for writing specifications of concrete sensors, along with an abstract Sensor interface that can be extended to accommodate different sensor types. A number of sensor APIs are being built on top of the Generic Sensor API. The Proximity Sensor specification defines an API to monitor the presence of nearby objects without physical contact. The Ambient Light Sensor specification defines an API to monitor the ambient light level or illuminance of the device's environment. The Battery Status API exposes information about the battery status of the hosting device (however, note that the future of this last specification is uncertain due to identified potential privacy-invasive usage of the API).
The detection of motion is made possible by a combination of low-level and high-level motion sensor specifications, also built on top of the Generic Sensor API: - the Accelerometer to obtain information about acceleration applied to the device's local three primary axes; - the Gyroscope to monitor the rate of rotation around the device's local three primary axes; - the Magnetometer to measure magnetic field around the device's local three primary axes; - the Orientation Sensor to monitor the device's physical orientation in relation to a stationary 3D Cartesian coordinate system. The Motion Sensors Explainer document is an introduction to low-level and high-level motion sensors, their relationship, inner workings and common use-cases. The Geolocation Sensor is an API for obtaining geolocation reading from the hosting device. The feature set of the Geolocation Sensor is similar to that of the Geolocation API, but it is surfaced through the Generic Sensor API, improves security and privacy, and is extensible. - Geofencing API - The ability to detect when a device enters a given geographical area would enable interactions and notifications based on the user's physical presence in a given place. The Geolocation Working Group started to work on a geofencing API to that effect. This work has been discontinued, partly out of struggles to find a good approach to permission needs that such an API triggers to protect users against privacy issues, and because the API depended on the Service Workers, which was not yet stable at the time. Work on this specification could resume in the future depending on interest from would-be implementers. - Additional sensor specifications - A number of additional sensor specifications have been considered in the past in the Device and Sensors Working Group, for instance to expose changes in ambient humidity (proposal which predates the Generic Sensor API), the Barometer Sensor or the Thermometer Sensor. 
Work on these proposals could resume in the future depending on interest from would-be implementers.
Blockchain Technology Simply Explained

Blockchain is a digital, transparent, public, append-only ledger. It ensures an unobstructed exchange of cryptocurrencies between a large number of members. The blockchain is nothing more than a digital finance ledger in which all transactions of every member of the network are documented. The mechanism behind the blockchain creates consensus between distributed, unknown parties. That means nobody needs to trust each other; you only have to trust the blockchain system. As a result, you are able to transfer money without the need for third parties like banks. This makes your transactions not only faster but also cheaper.

Today we will not focus on a specific network but rather concentrate on blockchain in general. Later on, we can build on that knowledge. For now, we will work with a fictional network. With it you can:
- Keep coins
- Send coins

If you want to manage your coins, you first need a wallet. It works like your bank account. The account holds a single address, which allows a unique assignment of transactions in the network. Therefore, there is always a recipient and a return address for each transaction.

Now the most important component: the actual network. It is decentralized, which means that there is no central instance that regulates the data exchange. If you make a bank transfer during online banking, the bank transfers the information from your input device to the bank server. After checking the transaction, the bank carries out the money exchange. It works completely differently in our blockchain network. It is based on a peer-to-peer system, where central servers are omitted. Instead, there is a union of computers that creates a decentralized network. Anyone who wants to can connect their computer to the network. They then become a nodal point, and can carry out tasks such as validating transactions or just transferring coins.
Since not every computer provides the same hardware performance, there are various kinds of nodes, which differ in their responsibilities. More on that later.

A ledger is a collection of transactions. Throughout history, people used pen-and-paper ledgers to keep track of the exchange of goods. In modern times, you use ledgers to store your information digitally. Over time, interest in a decentralized solution has grown more and more. The reason for this is the lack of trust in the digital world. Security is also becoming increasingly important. Blockchain enables such an approach using both distributed ownership and a distributed physical architecture.

To keep track of everything in our network, someone has to enter all transactions chronologically. Blockchain therefore serves as a digital version of an open ledger. It is a database for the entire network, integrating all transactions from a fixed period into one block. After a certain amount of time, this block is digitally sealed and attached to the blockchain. Thus, it provides a chronological listing of changes throughout the entire network, with the latest block always showing the most recent transactions.

Blockchain network users submit transactions to the blockchain network through software, which sends these transactions to a node within the blockchain network. The selected nodes then place the transaction in a block. Once the block has been validated and published, the transaction is part of the blockchain.

A blockchain block consists of different pieces of information. First we have the block header. It contains the block number, also known as block height in some blockchain networks, then the previous block header's hash value and a hash representation of the block data. The block header also shows the size of the block and a nonce value. For blockchain networks that utilize mining, this is a number that is manipulated by the publishing node to solve the hash puzzle.
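The hash puzzle that the nonce in the block header is manipulated to solve can be shown with a toy brute-force search. This is an illustrative sketch only; real networks hash a full binary header against an adjustable numeric target rather than counting leading hex zeros:

```python
import hashlib

def mine(header: str, difficulty: int = 4):
    """Brute-force a nonce so that sha256(header + nonce) starts with
    `difficulty` hex zeros. Finding the nonce takes many attempts;
    checking it, which is what the other miners do, is a single hash."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1
```

Each extra zero of difficulty multiplies the expected search time by 16, while verification stays a single hash: that asymmetry is what makes the solution usable as proof of work.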
Second, we have the block data. This contains a list of transactions and ledger events.

So, what are those hashes? In short: exactly what makes the blockchain safe. A hash is a checksum derived from the block's information. It is comparable to a digital fingerprint. In addition, each block also contains the checksum of the previous block. This creates a direct link between the blocks, making them tamper-proof. A change to an already documented transaction would cause all successive checksums to be incorrect.

A miner is a nodal point in our network. It is a computer or a union of computers. They provide the network with hardware performance: they validate incoming transactions and note them in the block. Our network verifies a block after a fixed time interval. All miners compete by solving a mathematical problem. The solution found is proof that the miner has performed his work. The miner who is the first to solve the puzzle shares it with the entire network. He also receives a reward for his effort. Then the other miners check again that the block has been processed properly, so that the new block can be attached to the blockchain. After that, the transaction is considered confirmed.

We have already discussed the most important components:
- Blockchain network
- Blockchain ledger
- Blockchain block
- Blockchain miner

Now it is time to clarify the connections. The accounts can be viewed as detached from the network. You use them to send coins. Miners verify all transactions. The blockchain is the information store in which all the transactions are entered. Therefore, each miner has his own local copy of the entire blockchain.

How does a blockchain work? We explain the interaction of the different components in the following example: Bob would like to send 10 coins to Dave. For this, he starts a corresponding send request via a web client. This creates a transaction that communicates with the network. The miners are busy creating a new block.
They also check and document the transactions. As a result, a miner then creates a block, which is added to the blockchain. In our example, a miner generates a new block. He documents all transactions, including Bob's. After a short time, the first miner to create the block attaches it to the blockchain. Once the miner has done his work, the transaction is complete. Bob lost 10 coins while Dave gained 10 coins.

Public vs Private Blockchain

Blockchain networks can be categorized based on their permission model, which determines who can maintain them. If everyone can publish a new block, the network is public. If only particular users can publish blocks, it is private. In simple terms, a private blockchain network is like a corporate intranet that is controlled, while a permissionless blockchain network is like the public internet, where anyone can participate. Private blockchain networks are therefore often deployed for a group of organizations and individuals.

Public blockchain networks are decentralized ledger platforms open to anyone publishing blocks, without permission from any authority. These blockchain platforms are often open-source software, freely available to anyone who wishes to download them. Since anyone has the right to publish blocks, anyone can read the blockchain and issue transactions. Therefore, any user of a public blockchain network can read and write to the ledger. Since public blockchain networks are open to everyone, malicious users may attempt to publish blocks in a way that subverts the system. To prevent this, public blockchain networks often use a consensus system that requires users to expend resources when publishing a block. This prevents malicious users from easily destroying the system, because it is usually not financially worthwhile.

Private blockchain networks are ones where users have to be authorized by some authority.
Since only authorized users are using the blockchain, it is possible to restrict read access and to restrict who can issue transactions. Private networks may allow anyone to read the blockchain, or they may restrict read access to authorized individuals; likewise, they may allow anyone to submit transactions, or restrict that to certain individuals. Private blockchain networks can have the same traceability, performance, resilience, and capabilities as a public network. They also use consensus models for publishing blocks, but these methods often do not require the expense or maintenance of resources, because everyone who joins the network must prove their identity. Private networks are also usually faster, and transactions can be given a certain degree of privacy.

Summarized: Blockchain Technology simply explained
- Blockchain is a transparent, public, append-only ledger
- It is a mechanism for creating consensus between distributed parties
- The parties don't need to trust each other, but they trust the system
- Miners validate incoming transactions and record them in blocks
- Transparency is one of the basic concepts of the Blockchain
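The hash-linking and mining described in this article can be sketched in a few lines of Python. This is a toy illustration only; real blockchains hash serialized binary block headers and use far higher mining difficulty, and the field names and difficulty below are invented for the example:

```python
import hashlib
import json

def block_hash(transactions, prev_hash, nonce):
    # A block's checksum covers its transactions, the previous block's
    # hash, and the mining nonce - the "digital fingerprint".
    payload = json.dumps([transactions, prev_hash, nonce]).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(transactions, prev_hash, difficulty=3):
    # Miners race to find a nonce whose hash starts with `difficulty`
    # zero digits: hard to find by brute force, trivial for the other
    # miners to verify once shared.
    nonce = 0
    while True:
        h = block_hash(transactions, prev_hash, nonce)
        if h.startswith("0" * difficulty):
            return {"transactions": transactions, "prev_hash": prev_hash,
                    "nonce": nonce, "hash": h}
        nonce += 1

def chain_is_valid(chain, difficulty=3):
    for i, b in enumerate(chain):
        expected = block_hash(b["transactions"], b["prev_hash"], b["nonce"])
        if b["hash"] != expected or not b["hash"].startswith("0" * difficulty):
            return False
        # Each block must point at the checksum of the block before it.
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = mine_block(["Alice pays Bob 5"], prev_hash="0" * 64)
block2 = mine_block(["Bob pays Dave 10"], prev_hash=genesis["hash"])
chain = [genesis, block2]
```

Changing the documented transaction in `genesis` would change its hash, break the `prev_hash` link in `block2`, and make `chain_is_valid` return `False`, which is exactly why a recorded transaction cannot be silently edited.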
I post a fair bit to Twitter, and I'm always wanting to go back and easily find an article I shared or a thought that I quickly tweeted. I've always thought that it would be extremely useful if I could just view all of my tweets based on whatever hashtags I would like to see. For instance, if I wanted to see everything I tweeted about hiking, I'd just have to click on my hiking hashtag link and it would show me everything on that topic that I've thrown out into the tweetverse.

Viewing My Posts By Hashtag on Twitter

While there isn't a nice listing of all of the hashtags that I've entered into tweets, you can see all of your posts on a topic by searching for the hashtag followed by your account name, so the above example for me would be '#hiking drempd'. I'd search for that and it would bring up all of my posts. This kind of works for my needs, but I'd really just like to see a listing of all of my hashtags, and maybe be able to organize them how I want. I could see organizing my tweet history into folders and grouping things together so I can easily find what I want. I guess the original intent of Twitter wasn't for it to be a personal repository of information, which I think is actually what I'm looking for. The other issue with using Twitter to store my information is that pesky 140-character limit. Especially when I'm trying to get a thought out of my head quickly, I generally don't want to take the time to condense it down to 140 characters.

My Way of Viewing My Posts By Hashtag

My solution to this problem is to just build something that does what I want it to do, so I built out Zindrop, which is a tool for saving thoughts and bookmarks much like Twitter, but one where I can view my tweets by hashtag (I'll have to think of something else to call posts in my program) and I'm not limited by that character limit.
Currently Zindrop just has a full list of all of my hashtags, but I'm already beginning to see the need for some improvements, like the ability to organize the tag list, and maybe hide some hashtags from appearing within it. For now, though, it does what I want it to do: gives me a place to easily write a thought or bookmark a page and easily find it later.

Do You Have a Need of Viewing Posts By Hashtag?

If you have the same need as I do, feel free to give Zindrop a try. It's still a pretty new thing, and I'd love to develop it further to meet the needs of however many people find it useful.
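For what it's worth, the core of "view my posts by hashtag" is just an index from tag to posts. Here is a minimal Python sketch of that idea (this is not how Twitter or Zindrop are actually implemented, and the sample posts are invented):

```python
import re
from collections import defaultdict

# A hashtag is a '#' followed by word characters.
HASHTAG = re.compile(r"#(\w+)")

def group_by_hashtag(posts):
    """Index a list of post strings by every hashtag they contain."""
    index = defaultdict(list)
    for post in posts:
        for tag in HASHTAG.findall(post):
            index[tag.lower()].append(post)
    return dict(index)

posts = [
    "Great trail up Mt. Si today #hiking #pnw",
    "New post on data modelling #programming",
    "Packing for the weekend #hiking",
]
index = group_by_hashtag(posts)
```

Clicking a tag in a listing then just means looking up `index["hiking"]`, which is what makes the feature cheap to build once you own your own posts.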
Five is a super-flexible template designed to provide loads of customizable settings, giving you the freedom to create many different styles and looks for your website layout.

Sizes & Values
Site Width - controls the width of the content area for the site, including any sidebar width.
Canvas Offset - determines the margin above and below the canvas, exposing the site background.
Canvas Padding - adjusts the padding of the main content area and the sidebars (if present); also controls the space between the logo and navigation when together.
Canvas Border Size - sets the size of the border wrapping around the outside of the canvas.
Logo Height (Navigation) - limits the height of the site logo when present within the navigation area.
Logo Height (Banner) - limits the height of the site logo when present within the banner area.
Navigation Spacing - controls the amount of padding used on the top and bottom of the navigation area.
Banner Area Height - sets the height of the banner area at the top of the canvas.
Page Spacing - controls the amount of spacing used at the top and bottom of pages.
Sidebar 1 Width - sets the width of sidebar one (when present).
Sidebar 2 Width - sets the width of sidebar two (when present).
Sidebar Text Size - the size, relative to the body text, of the text used inside either of the two sidebars.
Social Icon Size - sets the size of the (non-social-block) social icons.
Canvas Setting - controls whether the site canvas stretches out to the edges of the browser window (full) or snaps to the site width instead.
Top Navigation Position - positions the main navigation above or below the site banner area. If set to 'None,' the navigation is shown in sidebar 1 as a simple vertical list.
Top Navigation Alignment - controls the text alignment of the navigation links.
Banner Content - determines the content that is shown within the banner area.
Sidebar Position - sets the display location of the sidebar(s) across all pages.
Footer Alignment - controls the text alignment of the footer text.
Social Icon Style - sets the template-specific (non-social-icon-block) social icons style.
Social Icon Placement - sets the template-specific (non-social-icon-block) social icon position.
Page Thumbnail as Banner - enables a page thumbnail to be used in the header area.
Stretch Page Thumbnail - stretches the page thumbnail out to the edge of the window when enabled.
Site Description in Sidebar - reveals the site description text and site title in sidebar 1.
Underline Sidebar H3 - adds an underline to any H3 element present in either sidebar.
Disable Navigation Border - removes the border from the bottom of the navigation area.
Disable Footer Border - removes the border from the top of the footer area.
Hide Delimiter - turns off the '/' delimiter between navigation links and meta data bits.
Article Spacing - adjusts the space between posts on the blog list view.
Blog List Display - chooses the type of information shown on the blog list view.
Blog Byline - determines the position of the article's author name when showing a blog post.
Blog Dateline - determines the position of the article's date on a blog post.
Disable Pagination Border - removes the border from the top and bottom of blog pagination.
Hide Excerpt Thumbnail - hides the excerpt thumbnail on the blog list page.
Hide Icons - hides all the small icons used in blog meta information.
Hide Tags Categories - hides the tags and categories from view in a blog.
Product Background Color - sets the color behind the product image.
Product Overlay Color - sets the color of the overlay when product list titles are set to 'overlay.'
Products Per Row - determines the number of products shown per line on the product list.
Product List Titles - controls the position of the product title on the product list.
Product List Alignment - sets the text alignment of the product title on the product list.
Product Item Size - select an image ratio for the product photo on the product list.
Product Image Auto Crop - determines whether product images fill the image area or fit within it.
Product Gallery Size - select an image ratio for the product gallery on the product item page.
Product Gallery Auto Crop - determines whether product images fill the gallery area or fit within it.
Show Product Price - shows the price on the product list page when enabled.
Show Product Item Nav - shows the 'back to shop' link on the product item page.
Hello! We are in the early stages of setting up a process to transfer final grades from Canvas into PeopleSoft. I was just looking for input on who has done this and which route they took. What did you use to accomplish this? Any sharing of your experiences of what worked and what didn't would be greatly appreciated. I would rather not spin my wheels on something that has been proven not to be a good route to take.

Senior Systems Analyst - DBA
Enterprise Application Management
Northcentral Technical College
1000 W. Campus Drive, Wausau, WI 54401
715.803.1117 | firstname.lastname@example.org

We are in the middle of a project to do this right now. I'm not really involved on the development side (or the PeopleSoft side!), but I can give you an overview of the way our process works:

First, within Canvas, students are all enrolled in a section within a course that corresponds to a section in PeopleSoft. This allows us to do cross-listings and course merges in Canvas while preserving the section identification needed to pass grades back to PeopleSoft.

The way the workflow goes is that on the grade roster for a course section in PeopleSoft, instructors have a "Retrieve grades from Canvas" button. If they click that, a call is made to the Canvas Enrollments API (Enrollments - Canvas LMS REST API Documentation) to retrieve all the enrollments for the section. The enrollment record includes several different grades, including "current" and "final" grades, as well as the new "override" grade. I think we're using "current grade", but I'm not positive. Anyway, the grades for all the students are retrieved from Canvas, displayed as a percentage (as they come from Canvas), and populated into a column for the instructor to see. If the instructor has set up a grading schema in Canvas to assign a letter grade, then we fill in a grade column with that letter grade, and that is what's set to be passed to PeopleSoft.
If they did not set up a schema in Canvas, then the grade column is left blank for them to select a letter based on the percentage that is pulled over from Canvas. Once they are satisfied with the grades shown in the grade column, they can post the grades to PeopleSoft. Instructors can retrieve grades from Canvas as many times as they want, and post them to PS as many times as they want during the grading period. A log is kept of all the grades that are posted through that form, so instructors (and admins) can review any changes that have been made.

There is some error checking, of course. At the end of the page, there is a warning about students who are listed in the PS roster but do not appear in Canvas, and vice versa. Also, if there are restrictions on the grade letter values that can be entered in PS for that section, those restrictions are checked before the grade from Canvas is displayed. Does that help?

Hello there, gindt... I have been reviewing older questions here in the Canvas Community, and I stumbled upon your question. I thought that I would check in with you because we have not heard back from you since you first posted your question on February 28, 2019. It looks like @mzimmerman was able to respond to you later that afternoon. Have you had an opportunity to review Michael's feedback? If so, did it help to answer your question? Or are you still looking for some help with this topic? If you feel that Michael's response has answered your question, please feel free to click the "Mark Correct" button next to his response. However, if you are still looking for some help with your original question, please come back to this topic and post a message below letting us know how we might be able to help you out. For the time being, I am going to mark your question as "Assumed Answered" because we have not heard back from you and because there hasn't been any new activity in this topic for over five months.
However, that will not prevent you or others from posting additional questions and/or comments below that are related to this topic. I hope that's alright with you, Matt. Looking forward to hearing back from you soon.

Has your university completed this project? If so, do you have any instructions, documentation or code you would be willing to share with us so we can try to do the same?

Senior Manager, Learning Management Systems

Yes, we've finished the project, and the instructors are pretty happy with it. Our biggest challenge, I think, was trying to be sure that we were selecting the best "total" column from the multiple ones available, the one that best reflected what instructors were expecting to see. The other piece was getting instructors used to using grading schemas, but once they did, it helped simplify things for them quite a bit. We also decided not to enable the manual override grade in Canvas. It wouldn't be that much trouble to add the logic to handle it in the extract process, but the implementation within the grade book itself was too confusing. Instructors have the option to manually override any grades before final submission in PeopleSoft anyway.

A lot of the coding was done on the PeopleSoft side, since we implemented it as a pull from Canvas into PeopleSoft. I'll see if there's any specific code that we can share, but I can certainly share the kind of documentation that we give to faculty if it would help: https://www.unomaha.edu/registrar/_documents/how-to-import-grades-from-canvas.pdf
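To make the grade-mapping step of the workflow above concrete, here is a rough Python sketch. This is not UNO's actual code; the schema cutoffs are invented, and the record shape is an assumption based on what the Canvas Enrollments API returns (each enrollment carries a nested "grades" object with percentage scores such as current_score):

```python
def letter_grade(percentage, schema):
    """Map a percentage to a letter using a grading schema.

    `schema` is a list of (cutoff, letter) pairs sorted from the highest
    cutoff down; the first cutoff the percentage meets wins.
    """
    for cutoff, letter in schema:
        if percentage >= cutoff:
            return letter
    return schema[-1][1]

# A hypothetical grading schema; in practice this comes from the course.
SCHEMA = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
          (70, "C"), (60, "D"), (0, "F")]

def grades_from_enrollments(enrollments, schema=SCHEMA):
    """Build {student_id: (percentage, letter)} from enrollment records."""
    roster = {}
    for e in enrollments:
        pct = e["grades"].get("current_score")
        if pct is None:
            continue  # no gradeable activity yet; leave blank for the instructor
        roster[e["user_id"]] = (pct, letter_grade(pct, schema))
    return roster
```

The real implementation would also perform the error checks described above (roster mismatches between PS and Canvas, and restricted grade-letter values) before anything is shown to the instructor.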
Name (First LAST): Andrew KING
Email: Andrew.king (at) ens.fr
Position: Post-doc at LSP
LSP team: Audition
Office: 29 rue d'Ulm, 2nd floor, room 202

I grew up in England near Sheffield before studying for a bachelor's degree in psychology at the University of Hull (2006 – 2009). I then moved to Manchester, where I completed a master's degree, by research, in acoustics at the University of Salford (2009 – 2011). After this, I completed a PhD studentship at the University of Manchester in the field of audiology (2011 – 2015). At the end of 2015, I moved to Paris to begin my post-doctoral position at l'école normale supérieure.

I am interested in how the processing of temporal properties of sounds might benefit hearing in challenging environments, such as following a conversation against a noisy background. Currently I am studying how modulations in sound amplitude or frequency are detected, particularly in the presence of other modulations. My colleagues and I are comparing psychophysical thresholds of modulation detection, collected from people of various ages and levels of hearing sensitivity, against thresholds estimated by computer models of auditory processing.

During my PhD at the University of Manchester in the UK, I studied people's sensitivity to the temporal cues in a sound that tell a listener which side the sound comes from. I studied how sensitivity to these cues differed for older and younger people, with and without hearing loss, and how this sensitivity might dictate how well a person understands speech in noisy backgrounds.

King, A., Hopkins, K., Foley, C., & Plack, C. J. (in press). Differential Group Delay of the Frequency Following Response Measured Vertically and Horizontally. Journal of the Association for Research in Otolaryngology.
King, A., Hopkins, K., & Plack, C. J. (2014). The effects of age and hearing loss on interaural phase difference discrimination. The Journal of the Acoustical Society of America, 135(1), 342–351.
DOI: 10.1121/1.4838995.
King, A., Hopkins, K., & Plack, C. J. (2013). Differences in short-term training for interaural phase difference discrimination between two different forced-choice paradigms (L). The Journal of the Acoustical Society of America, 134(4), 2635–2638. DOI: 10.1121/1.4819116.
Hopkins, K., King, A., & Moore, B. C. J. (2012). The effect of compression speed on intelligibility: Simulated hearing-aid processing with and without original temporal fine structure information. The Journal of the Acoustical Society of America, 132(3), 1592–1601. DOI: 10.1121/1.4742719.
AI for more intelligent drug discovery – Physics World / Medical Physics Web, 04 May 2021

Leonard Wossnig, chief executive of quantum drug-discovery company Rahko, describes the powerful capabilities of artificial intelligence and quantum computing when combined.

Quantum technology promises to revolutionize many aspects of life. However, we will need to employ many supporting technologies to realize its full potential, with AI being particularly important among these. In the field of drug discovery, we expect quantum computing to help tackle currently intractable problems that stand in the way of discovering better and safer drugs for more diseases. However, we will only be able to harness the powerful capabilities of quantum computing for drug discovery if we can embed them into an AI-based pipeline.

In drug discovery, we follow a well-established path to find the best candidates for a new drug. The first step is computational modelling to select drug candidates with a range of target properties. We then manufacture the most promising few candidates. The third step involves experimentally testing the candidates to determine whether they indeed have the target properties. These steps are repeated until a candidate is confirmed to have the target properties. While effective, this process is extremely slow and expensive. Indeed, it takes an average of $2.6bn and 10 years to bring a single drug to market, largely because the process needs to be repeated so often due to our inability to identify high-quality drug candidates in the first step.

So why do the computational methods in our modelling so often fail to produce candidates that can pass successfully through the subsequent steps? The success and failure of such modelling is based on two key criteria: prediction accuracy and screening speed.
When it comes to the former, we need to predict the properties of each drug candidate with a certain degree of accuracy, to understand how each candidate will behave in the human body. Today’s computational chemistry methods model how a drug will interact with a target in the body by calculating those interactions classically when in fact drug candidates are governed by the laws of quantum mechanics. What that means is that the predictions of computational chemistry methods are often highly inaccurate. This is one area where quantum computers hold great promise, as they will allow us to model the interactions of the candidates with the targets in the human body using quantum mechanical calculations, which are extremely accurate. The other critical counterpart to prediction accuracy is the speed at which we are able to screen drug candidates. At the first step of the drug-discovery process, computational methods need to screen many candidates. In an ideal world we would be able to screen billions of candidates very quickly, modelling each one in a few milliseconds, thereby assessing a vast volume of candidates in a matter of days. Unfortunately, no quantum computer or classical computational chemistry method that calculates interactions will ever be fast enough to achieve such speed. This is where machine learning and AI become extremely important and valuable. With AI, we can take a few candidates, evaluate their properties selectively, then train a model to predict the properties of all remaining candidates, allowing us to screen large volumes of candidates at high speed. We do not know of any obvious disadvantages in using AI in combination with quantum computing, when it comes to the discovery and development of new drugs. However, to combine the two technologies in practice is a complicated endeavour requiring diverse, multidisciplinary teams of scientists working together to be able to neatly integrate the two technologies. 
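The screening strategy described here (evaluate a few candidates with the expensive method, then train a cheap model to predict the properties of the rest) can be illustrated with a toy surrogate model. The scoring function and the nearest-neighbour predictor below are invented stand-ins for real quantum-mechanical calculations and real machine-learning models:

```python
import math

def evaluate_expensively(candidate):
    # Stand-in for a slow, high-precision property calculation.
    # Here: a made-up score peaking at feature vector (2, 1).
    x, y = candidate
    return math.exp(-((x - 2) ** 2 + (y - 1) ** 2))

def nearest_neighbour_surrogate(labelled):
    """Return a cheap predictor: the score of the nearest evaluated candidate."""
    def predict(candidate):
        best_score, best_d = None, float("inf")
        for features, score in labelled:
            d = sum((f - c) ** 2 for f, c in zip(features, candidate))
            if d < best_d:
                best_score, best_d = score, d
        return best_score
    return predict

# Evaluate only a handful of candidates with the expensive method...
seed = [(0, 0), (2, 1), (4, 2)]
labelled = [(c, evaluate_expensively(c)) for c in seed]
predict = nearest_neighbour_surrogate(labelled)

# ...then screen a large pool cheaply and rank by predicted score.
pool = [(x / 2, y / 2) for x in range(9) for y in range(9)]
ranked = sorted(pool, key=predict, reverse=True)
```

The expensive function runs only three times, while all 81 pool candidates are ranked by the surrogate; that asymmetry is the speed gain the AI layer provides, and it is what makes each costly quantum-mechanical calculation count.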
While the four-step process described above seems straightforward, there are numerous additional complexities that need to be factored in, such as those inherent in collecting and working with experimental data. AI offers further benefits here, enabling the collection of higher-quality data, which in turn enables the training of other AI models with smaller amounts of data. Automation with AI can allow cheaper and more reliable data collection.

At Rahko, we work at the intersection of AI, quantum computing and computational chemistry. We are building a quantum drug-discovery pipeline to overcome the key challenges described above. Unlike standard deep-learning approaches, the AI models we are developing are highly specialized for discovery. Deep-learning methods, and more generally neural networks, have been immensely successful in areas such as image recognition, where data are abundant. However, due to the nature of the problem in drug discovery, we deal with very little data, and sourcing more data is very expensive or simply impossible. In scenarios such as this, where we have little data, neural networks fail to make good predictions because there are not enough data to train them.

This is where our team at Rahko excels. We embed the laws of quantum mechanics into our quantum machine-learning methods, for example in the way we represent data. This way, we are able to rely on minimal amounts of data to make good predictions that also generalize to many other candidates in the screening process. This is particularly important when combining AI with quantum computing, as quantum computers will be expensive to run for larger systems, and we will therefore need to be selective when running these high-precision calculations. Embedding quantum-mechanical calculations into our AI-based framework allows us to maximize the utility of each individual calculation we perform on a quantum computer.
The synergies between quantum computers and AI will have their most obvious benefit in the discovery of drugs and other materials. Here, it is crucial to combine highly precise but still expensive quantum-mechanical calculations with AI, in order to screen through billions of possible candidates and identify the most promising. Chemical simulation is also widely believed to be among the very first areas in which quantum computing will have a powerful impact, once the machines reach a sufficient scale. Another area of significant potential is the correction of errors in quantum computers, which may accelerate the broad availability of quantum computers of sufficient scale for useful, practical applications. AI for quantum error correction and mitigation has been demonstrated to correct and learn a range of errors in current quantum computers. Our team is actively working in this area in partnership with industrial and quantum-computing hardware companies to test and understand the capabilities of AI for error correction (arXiv:1912.10063).
- Easy-to-use web-interface with point-and-click functionality. No command line needed!
- L2GRE origination and termination
- NetStack features built-in
- PacketStack features built-in with 40 Gbps capacity
- Max 48 ports of 1/10GE

The Evolving Network

Doorbells, thermostats, TVs, washing machines, cars, watches - and the list goes on: the proliferation of the Internet of Things (IoT) is driving data production and usage, as well as application sharing by users at the "digital edge." To provide a better user experience and limit exposure, the modern network is evolving to rely heavily on edge computing - the use of microscale, remote-site or branch data centers that are decentralized and closer to the user. These dispersed sites bring many benefits, but with them comes the need for visibility at each site - solutions that either operate as turnkey deployments within locations that have monitoring tools, or serve sites where data is aggregated and sent to centralized data centers for security and monitoring.

VERSATILE. VISION EDGE.

Ixia's Vision E10S, part of the Vision portfolio of network packet brokers, has the right features to handle the visibility requirements of a single rack, branch office or remote site.

Vision Edge 10S
- Is cost-effective and can integrate seamlessly into your existing environment - ideal for scale
- Can be deployed rapidly and modified quickly with Ixia's easy-to-use web-interface, which makes configuration and re-configuration simple and hitless
- Is ideal for expansion as it is part of the Vision portfolio of network packet brokers
- Has both standard NPB features including load-balancing, aggregation, replication, etc.
as well as advanced features such as deduplication, packet trimming, header stripping and GRE tunneling, making it versatile enough to serve as a single-site or cross-site solution

Designed for the branch office and remote sites

The Vision Edge network packet brokers can ensure the right data gets to the right tools in multiple data center configurations:
- As a turnkey solution for single-site deployment with network security and monitoring tools
- In a distributed campus network using direct connect to connect with a data center
- In data centers that aggregate and send data across LAN or WAN to a central NOC/SOC for real-time visibility operations
- In a leaf-spine architecture, aggregating across multiple racks in a smaller or microscale data center
- In a hybrid monitoring environment with Ixia's CloudLens platform - visibility for public, private and hybrid clouds

Vision Edge network packet brokers have Ixia's robust baseline NetStack capabilities including aggregation, replication, filtering and load balancing - all with three stages of filtering and Ixia's patented dynamic filter compiler. Vision Edge 10S also has PacketStack capabilities, including L2GRE tunnel origination and termination. Click on each feature stack to learn more.

Robust filtering, aggregation, replication, and more
- Three stages of filtering (ingress, dynamic, and egress)
- Dynamic filter compiler
- Priority-based filtering
- Source port labeling (VLAN tagging & untagging)
- Aggregation & replication
- Load balancing
- Double your Ports - DyP (Simplex)

Intelligent packet filtering, manipulation, and transport
- Header (protocol) stripping
- Packet trimming
- Data masking
- GRE tunneling
- Burst protection (deep packet buffering)

AppStack capabilities are not available for this network packet broker or this platform at this time — visit the page to learn more.
Context-aware, signature-based application layer filtering
- Application identification
- Geolocation & tagging
- Optional RegEx filtering
- IxFlow (NetFlow + metadata)
- Packet capture
- ATI subscription
- Real-time dashboard

MobileStack capabilities are not available for this network packet broker or this platform at this time — visit the page to learn more.

Visibility intelligence tailored for the mobile carrier evolved packet core
- GTP & SIP correlation
- GTP & SIP load balancing
- Subscriber filtering
- Subscriber sampling
- Subscriber whitelisting
- Packet core filtering

TradeStack capabilities are not available for this network packet broker or this platform at this time — visit the page to learn more.

Market data feed monitoring
- Gap detection
- Feed and channel health
- High-resolution traffic statistics
- Microburst detection
- Subscription for feed updates
- Simplified feed management
I apologize for the somewhat clickbait title, but I just had to create the meme above after seeing Elon Musk's tweet. I thought that was hilarious... anyways, back to your regular programming.

When you finish a PhD in machine learning, they take you to a special room and explain that great data is way more important than all the fancy math and algorithms you just learned.

At least, that is how I imagine it. I don't have a Ph.D., so I couldn't say for sure, but it seems likely.

Data » Algorithms

That is it — the secret to building great AI systems — have great data. I could end the article here, but let's take a look at some supporting facts.

First, accuracy improvements from growing dataset size and scaling computation are empirically predictable. This was part of the conclusion of the paper Deep Learning Scaling is Predictable, Empirically from a group at Baidu. And here is one of the graphs from their results:

The above graphs empirically show that "deep learning model accuracy improves as a power-law as we grow training sets for state-of-the-art (SOTA) model architectures." Basically, you could use the power law to predict how much better your model would perform given an increase in training data. Now, stop and think about that for a minute. An almost guaranteed way to improve your deep learning model is to gather more data. It is very hard to say the same for the endless weeks of model tweaking and attempts at inventing new architectures. Google saw this as well in their 2009 paper The Unreasonable Effectiveness of Data.

Second, GitHub is full of open-source state-of-the-art algorithms, but open data are much rarer. Want the code to run the latest state-of-the-art deep learning model? You can probably find some version of it on GitHub. Want some data on Google's search result performance or Netflix viewership? Good luck. Companies are aware that data advantages are almost always superior to algorithmic advantages.
For example, DeepMind was acquired by Google for an estimated $500+ million. This was a purchase of amazing talent and algorithms. Two years earlier, Facebook acquired Instagram for $1 billion. Why is a photo-sharing app worth up to twice as much as DeepMind? In my opinion, one of the largest reasons is data. As quoted in this Forbes article:

Facebook's databases need this info to optimize the media it will bring to you. This data is WORTH S***LOADS! Imagine you're a ski resort and want to reach skiers, Instagram will give them a new way to do that, all while being far more targeted than Facebook otherwise could be.

If you were to sit down and create a list of "great" AI companies, you would almost surely discover that all of them are also great data companies. Building almost magical AI is a lot easier if you can start with vast amounts of data.

Lastly, Google wrote a follow-up paper in 2017 to their work on the Unreasonable Effectiveness of Data, titled Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. Here is a chart from the paper:

Source: Revisiting Unreasonable Effectiveness of Data in Deep Learning Era

Which, among other things, shows that performance increases logarithmically with the volume of training data. There is also an amazing quote in the conclusion of the paper:

While a tremendous amount of time is spent on engineering and parameter sweeps; little to no time has been spent collectively on data. Our paper is an attempt to put the focus back on the data.

Put the Focus Back on the Data

How many companies today are talking about building a data science team? I get it — it is sexy to hire that incredibly smart Ph.D. student from MIT. And I am not saying that is a bad thing. But. How many of these same companies are talking about investing in building larger and better datasets? Too often, I see companies treating data as a fixed asset, or they are extremely skeptical of data investments.
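The Baidu power-law result quoted earlier even gives you a back-of-the-envelope way to argue for that kind of data investment: fit err(N) = a * N^(-b) through two measured (dataset size, error) points and extrapolate. The numbers below are invented, and the exponent b is task- and model-dependent, but the arithmetic is the point:

```python
import math

def fit_power_law(n1, err1, n2, err2):
    """Fit err(N) = a * N**(-b) through two measured (size, error) points."""
    b = math.log(err1 / err2) / math.log(n2 / n1)
    a = err1 * n1 ** b
    return a, b

def predicted_error(n, a, b):
    # Extrapolate the fitted curve to a larger dataset size.
    return a * n ** (-b)

# Invented measurements: 8% error at 1M training examples, 4% at 4M.
a, b = fit_power_law(1_000_000, 0.08, 4_000_000, 0.04)
est = predicted_error(16_000_000, a, b)
```

With these made-up measurements the fit gives b = 0.5, meaning every 4x increase in data halves the error, and the model predicts 2% error at 16M examples. That is the kind of estimate you can put next to a dollar figure in a budget conversation.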
Go ahead, ask for $100k to create your own proprietary dataset that would increase your scale of data 10–1,000x. Let me know how that conversation goes (that same executive, though, would probably be a lot more excited about hiring another data scientist). I hope I have convinced you to think more strategically about data and its value in your AI efforts. I challenge you to sit down today and think about how you can get more and better data for whatever problem you are trying to solve. It might be something as simple as turning on Google Analytics for your start-up eCommerce shop. The data science unicorn you hire in the future will thank you.

I read a lot about data science and leadership. I thought it might be helpful for others to send a weekly email with my top 3 favorite articles of the week. If you are interested, sign up below!
Multi Threading with VB.NET

I've written a hot folder utility which works great when one file at a time is dropped. When a file is dropped into the hot folder, I add the filename to an array and then use a timer event to check whether the array is empty; if it isn't, I take the first value and start processing. If I drop 2 files, the first file is processed before the 2nd file is processed. I've been researching BackgroundWorkers and struggling to implement one. Any insight on how to get each timer event to trigger in a new thread easily?

Wow, this is a broad topic. You need to show what you've tried and ask a specific question. Also, read this: http://stackoverflow.com/help/how-to-ask

Why does the order in which the files are processed matter? What defines "the order"? If you drop them at the same time (in a single copy action), what inherently makes one file "first"? Are you using FileSystemWatcher or is this a roll-your-own version? Did you go over the MSDN article? https://msdn.microsoft.com/en-us/library/ywkkz4s1(v=vs.140).aspx Can you show what you have been doing?

The way you probably should do this is to create a class that contains a BGW, a timer, the code for processing the files, and a property for the filename. For each file, create an instance of the class and set the filename; the class then creates the BGW and processes the file.

The order of the files being processed doesn't matter. I am using FileSystemWatcher with the OnCreated event. I have read the MSDN article, but I must be slow after the holiday this weekend because it's not making sense to me. I'll reread it after some coffee in a bit.

Put the FSW inside a collection class with a FileItem class. When the FSW fires, add the item to the collection. Add a timer which goes off every 1.5-2.5 secs. Process the files. Garnish with events to report changes (queued, complete, error) to the form (?).
Each file may need a minimum delay, or at least one timer cycle, before processing to be sure the OS is done creating it. No BGW really needed - the FSW runs on its own thread already.

Plutonix - great insight, that is what I was hoping. I enable my FSW with one button and initialize the path, buffer size, etc. When an event (OnCreated) fires, I add the full path to an array, and then I have a timer that fires every 2 seconds, takes the files from the array, and processes them. Can you provide some sample code on how to do what you're explaining? Thanks.

When more than one person comments, you need to use @username to be sure the person gets pinged.

Yes, but please fix your question - it is very broad (and 1 vote from being closed); as is, the requirement for processing them in order seems specious.
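The design described in the answers above (an FSW feeding a collection, plus a timer that only drains entries at least one cycle old) might be sketched like this in VB.NET. This is only a sketch under assumptions: the class name `HotFolderWatcher`, the `FileProcessed` event, and the 2-second interval are illustrative, not from the thread.

```vbnet
Imports System.IO
Imports System.Collections.Concurrent

Public Class HotFolderWatcher
    Private ReadOnly _fsw As FileSystemWatcher
    Private ReadOnly _timer As System.Timers.Timer
    ' queue of (full path, time the Created event fired)
    Private ReadOnly _pending As New ConcurrentQueue(Of Tuple(Of String, Date))

    Public Event FileProcessed(path As String)

    Public Sub New(folder As String)
        ' the FSW raises Created on its own thread; just record the file
        _fsw = New FileSystemWatcher(folder)
        AddHandler _fsw.Created,
            Sub(s, e) _pending.Enqueue(Tuple.Create(e.FullPath, Date.Now))
        _fsw.EnableRaisingEvents = True

        ' fires every 2 secs on a thread-pool thread, so processing
        ' never blocks the UI; no BackgroundWorker needed
        _timer = New System.Timers.Timer(2000)
        AddHandler _timer.Elapsed, AddressOf ProcessPending
        _timer.Start()
    End Sub

    Private Sub ProcessPending(sender As Object,
                               e As System.Timers.ElapsedEventArgs)
        Dim item As Tuple(Of String, Date) = Nothing
        While _pending.TryPeek(item)
            ' leave files younger than one timer cycle in the queue,
            ' so the OS is done creating them before we touch them
            If (Date.Now - item.Item2).TotalSeconds < 2 Then Exit While
            If _pending.TryDequeue(item) Then
                ' ... open and process item.Item1 here ...
                RaiseEvent FileProcessed(item.Item1)
            End If
        End While
    End Sub
End Class
```

The form would handle `FileProcessed` (marshalling back to the UI thread with `Invoke` if it updates controls), which matches the "garnish with events" suggestion; a real version would also want error and queued events.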
Find out how to create compelling architectural designs using the modeling tools in Autodesk Revit software. In this course, author Paul F. Aubin demonstrates the entire building information modeling (BIM) workflow, from creating the design model to publishing a completed project. The course also covers navigating the Revit interface; modeling basic building features such as walls, doors, and windows; working with sketch-based components such as roofs and stairs; annotating designs with dimensions and callouts; and plotting and exporting your drawings.

Sometimes your project will be made up of more than one Revit file. There's a variety of reasons why you might do this: different members of the team might be working in different building files, or perhaps they are working under different disciplines, like structural and mechanical. Whatever the reason might be, we can link two Revit files to one another, so that if changes occur in one of the Revit files, we can reload and see those changes in the file it's linked into.

So I'm in a file here called Office Building, and it's just a simple office building structure. I'm just going to link in a small outbuilding. I do this by going to the Insert tab, and I'm going to click on the Link Revit button. Now this will bring up the familiar browse window, and I can choose any of the files that I have in here; I want to pick this file called Shed, just a small shed building that we're going to bring in. And if you followed along in the movies on linking CAD files, you'll see that this dialog is much simpler. It doesn't have all of the settings down here at the bottom that the Link CAD dialog had, and the reason for that is that we're linking Revit to Revit now. Revit is familiar with how to interpret the geometry in another Revit file, so it doesn't need to ask us a lot of questions about how it should do that. The only real question it needs answered is: where do we want to put it?
Now we've got a lot of the same options. We can put it Center to Center, Origin to Origin, or we can bring it in manually. Usually, what you want to do is just accept the default Center to Center, click Open, see where it comes in, and then simply move the file to the correct position. So naturally we don't want our shed sitting right on top of our building; maybe it's out over here somewhere, off to the side on the site. So I'm simply going to select it, and then here on the Modify ribbon I'll click the Move tool, pick a base point, and move it over here. Now I'm doing that a little bit imprecisely right now, but of course I could type in exact coordinates, as we discussed in some of the previous movies. So I'll leave that to you if you want to move it by an exact amount.

Now at this point, if I go to my 3D view by clicking on my default 3D view icon here, the little birdhouse icon, you'll see that I've got my two buildings sitting next to one another. Now over on the Project Browser, if I scroll down to the bottom, Revit links actually appear right here on the Project Browser. So if I expand that out, you'll see that the shed building is shown right here. It's got a small blue arrow next to it. That blue arrow is telling me that it's currently loaded, and of course we can see that on screen, because we can actually see the building.

If I wanted to open the shed and do some work on it, there is a limitation with linked Revit files: you can't have both your host building (Office Building in this case) and your nested link file (the shed in this case) open at the same time in the same session of Revit. They can be open in two different sessions of Revit, so if we have two users working on separate computers, each one can be working on a different building, but you can't have both buildings open on the same person's computer at the same time.
So if I want to open up the shed, and I click on it, that will generate an error message. It'll say: sorry, you can't do that, because it's currently loaded. Do you want to unload the link? Now if I say yes here, it'll actually remove that link right there, and it's going to warn me that I can't undo that, and I'll say that's fine. It opens up the shed, and if we switch windows back to the office building, first of all you'll see the shed is missing, and if I scroll down you can see a big red X next to it. So it's showing me that the shed has now been unloaded. It didn't remove it, it didn't delete it, but it's just not currently loaded.

Now I'm going to switch over here to the shed and make some changes. It's got two windows on this side and this little entry patio over here. Maybe I'll select one of these windows, copy it, and add an extra window. Then I'll select the door here; maybe I want something a little larger. When I open up the list, I only have the default door in here, so that simply means I need to go to the Architecture tab, click on the Door tool, and load a family, as we've done in a previous movie; I'll just bring in a double flush door like so. Instead of actually placing the door, I'll press Escape, select this existing door, and now you can see that I have the double door available there, and I'll just choose the size.

So we've made those two changes. Both should be pretty noticeable when we reload this file back into the other project. I'm going to go to the R here and go down to Close. It's going to ask me if I want to save the changes to the shed, and I'm going to say Yes. Then, back here, in order to see those changes I have to reload the shed file. I can do that by right-clicking right here and choosing Reload, and you'll see the shed appear; when I zoom in, it now has three windows on this side and a double door. So that's basically the value of setting up a linked file in Revit.
One person can be working in the shed file making their changes, another person can be working in the office building making their changes, and every so often each of these users can reload their linked file and see any of the changes that their colleague has made.
PoCs, MVPs and prototypes are the very first step when startups or new businesses are conceived. But even though this is a crucial step in building new businesses, the difference between them is not always clear. That's understandable, because their goals and the stage they take place in are similar: they are meant to prove your idea is worth it, and they take place at the very beginning of your startup path. At Corporate Lab we're experts at building new businesses, and creating PoCs, MVPs and prototypes is part of our daily workload. Today, we analyze PoC vs MVP vs prototype, and we share our own definitions and key features. Keep reading.

PoC: Definition and key features

PoC stands for proof of concept; it is also known as proof of principle. A PoC is a process that aims to demonstrate that a certain idea can be turned into a project. The key point here is the feasibility of the idea. Most of the time, a proof of concept is required in order to know whether a project is feasible and can be developed from a technology perspective. Apart from testing the technical feasibility of the idea, a PoC is also a useful tool to determine whether it is financially viable: the result may reveal that it is technologically feasible, but not profitable. While creating a proof of principle, it's not necessary to make the whole system work, just to sketch the main points. Thanks to this, you can plan the implementation over the longer term and minimize stoppers or failures during the development of the idea. The concept is used most often in the tech industry and is common in the startup world, but it can also be found in the drug discovery, manufacturing, science and engineering sectors.

Goal: Demonstrate an idea can turn into reality. Convince stakeholders of the value of your idea.

Not its goal: A proof of concept doesn't explore the market's needs or potential interest in the idea.

Advantages: PoCs allow you to save money and time in case the idea is not feasible at all.
It also helps you visualize alternatives if the idea is partially unfeasible.

What should your proof of concept include?
- Defined criteria for feasibility
- Documentation of the whole development process
- An evaluation component
- Next steps and proposals to move forward

Prototype: Definition and key features

A prototype is a draft of your project. Once you have determined that your idea is feasible from a tech and financial standpoint, you can design an early draft of the final product. A product prototype is not meant to fully work, be completely designed, or include all the features you have in mind. Instead, it is meant to give an image of what your idea will look and feel like once it's finished. Prototypes are not meant to be released to the general public, but to be tested among your own team, acquaintances, or beta testers. Thanks to their feedback, you can spot possible errors or improvements, so your product is as neat as possible when you launch it.

Goals: Test the idea's usability and functionality. Give stakeholders a rough idea of what the project will look like when it's done.

Not its goal: It's not meant to fully work or include all of its features.

Advantages: Prototypes allow you to get early feedback, gain first insights into the market, spot problems in advance, and reduce time and cost.

What should your product prototype include?
- A proof of concept
- Previous research on market trends, needs and pains
- A physical or digital description, including the most essential characteristics
- Functional essential features
- A provisional patent, if needed

MVP: Definition and key features

MVP stands for minimum viable product. An MVP is a first version of your product. Even though it has room for improvement and you will scale it with time (or even pivot it completely), an MVP is the first functioning version of your idea. As opposed to a PoC or a prototype, a minimum viable product fully works and is delivered to the market.
As its name suggests, an MVP is a product in itself. It's probably an improvable product, but it fully works, it is delivered to the market, and customers can use it as they use any other product on the market. Since a minimum viable product is so minimal, you have to be careful about which features it includes. An MVP will help you know whether the market is ready for your product and willing to consume it. Hence, you have to define its core value and deliver it with the most essential features. Distinguishing between must-haves and nice-to-haves is key here.

Once your MVP is built and launched, it is time to test its performance and get the important answers that will make it successful in the long term. These tests can be oriented toward fully defining your target audience, finding the right distribution channel, establishing a fair price, and so on. You should form an initial hypothesis about each of these before building your MVP; its later performance will help you finish defining them. Once you have these answers, you will be able to establish competitive advantages and defensibility measures that will help you not only maintain, but also scale your company.

Goals: Deliver the value proposition of your product. Get evidence that the product has potential and can be successful in the long term. Test whether your idea has product-market fit. Test audiences and define your target market, channels, user personas, and so on.

Not its goal: This is not meant to be your final product, only a first version. By delivering and testing your MVP, you can further develop it, iterate on it, or even pivot if it's not successful.

Advantages: Launching an MVP before a more developed product helps you save time and resources in case there's no product-market fit. Plus, starting from a minimal version can help you design additional features tailored to your target's needs.

What should your minimum viable product include?
- A previous proof of concept
- Previous market research that shows minimum interest in your product, and a broad definition of your target market
- The product's core value
- A fully functional set of features that deliver the core value
- Defined use cases and user journeys

👉 If you feel ready to build your MVP, take a look at our 9 tools to build MVPs and get your hands on it ASAP! 👈

At Corporate Lab, we're experts in building fast MVPs. We work in an agile way, focus on market validation, and achieve a quick product-market fit. If you run a company and want to include corporate innovation in your roadmap, send us a message. We can detect new business opportunities and build new businesses to complement your existing ones.